DevOps · August 20, 2012 · 6 min read

The DevOps Origin: How the Cult of Continuous Integration (Jenkins) Began in Our Labs

Aunimeda

Before CI/CD, our deployment process in 2010 was:

  1. Open FileZilla (FTP client)
  2. Navigate to the project directory
  3. Select changed files (manually, by memory)
  4. Upload to the server
  5. Hope we didn't miss any files
  6. Refresh the production site
  7. Discover the bug we introduced
  8. Roll back by... uploading the old files (which we may or may not have kept)

This was the industry standard. We weren't unusual. The most common "improvement" was using SSH and rsync instead of FTP, but the manual selection of changed files and the lack of automated testing remained.
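That SSH-and-rsync "improvement" looked roughly like this. A self-contained sketch: in practice the destination was something like user@server:/var/www/site, but here both ends are local temp directories so the script runs anywhere, and all paths are placeholders.

#!/bin/bash
# Sketch of rsync-over-SSH replacing FTP uploads. SRC stands in for
# the project checkout, DEST for the server's web root.
set -e

SRC="$(mktemp -d)"
DEST="$(mktemp -d)"

echo '<?php echo "v2";' > "$SRC/index.php"
mkdir -p "$SRC/.git" && touch "$SRC/.git/config"  # junk we must not ship

# -a preserves permissions and timestamps, --delete removes files that
# no longer exist in SRC, --exclude keeps .git off the server
rsync -a --delete --exclude='.git' "$SRC/" "$DEST/"

ls "$DEST"   # index.php only, no .git

This automated the transfer itself, but a human still decided when to run it and nothing verified the code worked first, which is exactly the gap CI filled.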

The ideas behind DevOps were first aired at the Agile 2008 conference, and the term itself took hold with the first DevOpsDays conference in 2009. The concept spread slowly; by 2012 it was a movement, not yet a job title.

We adopted Continuous Integration seriously in 2012. It changed how we worked more fundamentally than almost any other technical decision we made that decade.


Jenkins: The CI Server

Jenkins started as Hudson at Sun Microsystems in 2004. After Oracle acquired Sun in 2010, a dispute over the project's governance led the community to fork it and rename the fork Jenkins in early 2011. By 2012 it was the dominant open-source CI server.

Installation was simple: Jenkins shipped as a single Java WAR file, and Ubuntu users could install it straight from the project's apt repository:

# Install Jenkins on Ubuntu
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

# Jenkins runs on port 8080 by default
# Open browser: http://your-server:8080

Then configure a job through the web UI:

<!-- Jenkins job config.xml (behind the GUI) -->
<project>
  <scm class="hudson.plugins.git.GitSCM">
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <url>git@github.com:aunimeda/project.git</url>
        <credentialsId>github-deploy-key</credentialsId>
      </hudson.plugins.git.UserRemoteConfig>
    </userRemoteConfigs>
    <branches>
      <hudson.plugins.git.BranchSpec>
        <name>*/master</name>
      </hudson.plugins.git.BranchSpec>
    </branches>
  </scm>
  
  <triggers>
    <!-- Poll SCM every minute, build if changes found -->
    <hudson.triggers.SCMTrigger>
      <spec>* * * * *</spec>
    </hudson.triggers.SCMTrigger>
  </triggers>
  
  <builders>
    <hudson.tasks.Shell>
      <command>
        # Install dependencies
        composer install --no-dev --optimize-autoloader
        npm install
        
        # Run tests
        ./vendor/bin/phpunit tests/
        
        # Build frontend assets
        npm run build
      </command>
    </hudson.tasks.Shell>
  </builders>
  
  <publishers>
    <!-- Email notification on failure -->
    <hudson.tasks.Mailer>
      <recipients>team@aunimeda.com</recipients>
      <dontNotifyEveryUnstableBuild>false</dontNotifyEveryUnstableBuild>
      <sendToIndividuals>true</sendToIndividuals>
    </hudson.tasks.Mailer>
  </publishers>
</project>

The first pipeline: checkout code, run PHPUnit tests, send email if anything failed. No deployment yet. Just "does the code work?"
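Polling every minute worked, but it meant up to sixty seconds of lag and constant load on the Git server. Jenkins also supported remote build triggers (the "Trigger builds remotely" job option), so a git post-receive hook could ping Jenkins directly. A sketch; the host, job name, and token are placeholders, not our actual setup:

#!/bin/bash
# Hypothetical git post-receive hook: ask Jenkins to build now
# instead of waiting for the next SCM poll.
JENKINS_URL="http://jenkins.internal:8080"
JOB="project-ci"
TOKEN="SECRET"

if curl -fsS "$JENKINS_URL/job/$JOB/build?token=$TOKEN" > /dev/null 2>&1; then
    STATUS="triggered"
else
    STATUS="unreachable"
fi
echo "Jenkins $STATUS"

If Jenkins is unreachable the hook degrades gracefully: the one-minute poll still picks up the push, so the trigger is an optimization, not a dependency.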


Writing the First Tests

Before CI, we barely wrote automated tests; QA was manual, done just before deployment. After installing Jenkins, "does the CI pipeline pass?" became part of daily work, and a meaningful pipeline needed tests to run.

<?php
// PHPUnit tests - 2012 style
// tests/ProductTest.php

class ProductTest extends PHPUnit_Framework_TestCase {
    
    protected $db;
    
    public function setUp() {
        // Use a test database
        $this->db = new PDO('mysql:host=localhost;dbname=test', 'test', 'test');
        // Seed with fixtures
        $this->db->exec(file_get_contents(__DIR__ . '/fixtures/products.sql'));
    }
    
    public function tearDown() {
        // Clean up
        $this->db->exec('TRUNCATE TABLE products');
    }
    
    public function testProductCreation() {
        $product = new Product($this->db, [
            'name'  => 'Test Product',
            'price' => 29.99,
        ]);
        
        $id = $product->save();
        
        $this->assertGreaterThan(0, $id);
        
        // Verify it was actually saved
        $stmt = $this->db->prepare('SELECT * FROM products WHERE id = ?');
        $stmt->execute([$id]);
        $row = $stmt->fetch();
        
        $this->assertEquals('Test Product', $row['name']);
        $this->assertEquals(29.99, (float)$row['price']);
    }
    
    public function testProductPriceValidation() {
        $this->setExpectedException('InvalidArgumentException');
        
        $product = new Product($this->db, [
            'name'  => 'Invalid Product',
            'price' => -10,  // Should throw
        ]);
    }
    
    public function testProductSearch() {
        // Insert test data
        $this->db->exec("INSERT INTO products (name, price) VALUES ('Widget A', 10), ('Widget B', 20), ('Gadget C', 30)");
        
        $repo = new ProductRepository($this->db);
        $results = $repo->search('Widget');
        
        $this->assertCount(2, $results);
        $this->assertEquals('Widget A', $results[0]['name']);
        $this->assertEquals('Widget B', $results[1]['name']);
    }
}

The discipline of writing tests exposed bugs in code we thought was working. For testProductPriceValidation, we wrote the test before writing the validation and discovered the Product class happily accepted negative prices, so we fixed it. That became a pattern: we found more bugs by writing tests than we ever had through manual QA.


Adding Deployment

Once the tests were reliable, after roughly eight weeks of setup and of writing tests for the existing code, we added automated deployment.

#!/bin/bash
# deploy.sh - Jenkins calls this after green tests

set -e  # Exit immediately if any command fails

SERVER="deployer@production.example.com"
APP_DIR="/var/www/application"
RELEASE_DIR="$APP_DIR/releases/$(date +%Y%m%d%H%M%S)"
CURRENT_LINK="$APP_DIR/current"

echo "=== Deploying to production ==="

# Create release directory on server
ssh $SERVER "mkdir -p $RELEASE_DIR"

# Sync files (rsync is fast - only changed files)
rsync -avz --exclude='.git' \
  --exclude='node_modules' \
  --exclude='vendor' \
  ./ $SERVER:$RELEASE_DIR/

# Install dependencies on server
ssh $SERVER "cd $RELEASE_DIR && composer install --no-dev --optimize-autoloader"

# Run database migrations
ssh $SERVER "cd $RELEASE_DIR && php artisan migrate --force"

# Atomic symlink switch (zero-downtime deploy)
# nginx/apache serve from 'current' symlink
ssh $SERVER "ln -sfn $RELEASE_DIR $CURRENT_LINK"

# Reload PHP-FPM to pick up new code (faster than restart)
ssh $SERVER "sudo service php5-fpm reload"

echo "=== Deployment complete ==="
echo "Release: $RELEASE_DIR"

# Keep last 5 releases, delete older ones
ssh $SERVER "ls -dt $APP_DIR/releases/*/ | tail -n +6 | xargs rm -rf"

The ln -sfn symlink swap was the zero-downtime deployment technique. Nginx's document root pointed to current/, which pointed to the latest release. The symlink switch was atomic at the filesystem level — there was no moment where current/ pointed to nothing. Requests in flight finished against the old release; new requests hit the new release.
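The same symlink trick made rollback a one-liner: re-point current at the previous release. A self-contained sketch, using a temporary directory in place of /var/www/application and release names matching the timestamp format deploy.sh generates:

#!/bin/bash
# Rollback via symlink re-point. Temp dirs stand in for the real
# server layout so the sketch is runnable anywhere.
set -e
APP_DIR="$(mktemp -d)"
mkdir -p "$APP_DIR/releases/20120801120000" "$APP_DIR/releases/20120820093000"

# deploy.sh left 'current' pointing at the newest release
ln -sfn "$APP_DIR/releases/20120820093000" "$APP_DIR/current"

# Rollback: pick the second-newest release (timestamped names sort
# chronologically) and atomically re-point the symlink
PREVIOUS=$(ls -d "$APP_DIR"/releases/*/ | sort -r | sed -n '2p')
ln -sfn "${PREVIOUS%/}" "$APP_DIR/current"

readlink "$APP_DIR/current"   # now the 2012-08-01 release

This is also why deploy.sh keeps the last five releases on disk: without them, rollback degrades to re-uploading old files, which was the exact failure mode of the FTP era.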


The Pipeline as a Gate

The deeper cultural change was treating the pipeline as a gate.

Before: "I'll deploy this and fix any issues that come up." After: "I can't deploy until the pipeline is green."

This seems trivial. It wasn't. It meant:

  • Merging broken code to main was not acceptable
  • Every developer's machine had to run the full test suite before pushing
  • Infrastructure changes (database schema changes, config changes) were scripted and tested, not applied manually

#!/bin/bash
# Pre-commit hook (local) - runs the tests before each commit
# Install as .git/hooks/pre-commit and make it executable

echo "Running tests before commit..."
./vendor/bin/phpunit tests/ --stop-on-failure

if [ $? -ne 0 ]; then
    echo "Tests failed. Commit aborted."
    exit 1
fi

echo "All tests passed."

Pre-commit hooks enforced tests locally. The Jenkins pipeline enforced them on the server. Two gates: local and remote.
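One wrinkle: git does not clone .git/hooks, so the local gate had to be installed on every developer's machine. The usual fix was to keep hooks in a versioned directory and copy them into place after cloning. A sketch, where scripts/hooks is an assumed repo layout, not necessarily ours, and a temp dir stands in for a fresh clone:

#!/bin/bash
# Install versioned hooks into .git/hooks, since hooks are not
# copied by 'git clone'.
set -e
REPO="$(mktemp -d)"
mkdir -p "$REPO/.git/hooks" "$REPO/scripts/hooks"
printf '#!/bin/bash\necho hook\n' > "$REPO/scripts/hooks/pre-commit"

# The post-clone setup step: copy and mark executable
cp "$REPO/scripts/hooks/"* "$REPO/.git/hooks/"
chmod +x "$REPO/.git/hooks/pre-commit"

"$REPO/.git/hooks/pre-commit"   # prints "hook"

Because the local gate can always be skipped or misinstalled, the Jenkins gate stayed authoritative: the remote pipeline, not the hook, decided whether code could ship.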


What DevOps Actually Meant in 2012

The "DevOps" label was new, but the problems were old. "Dev" (development) and "Ops" (operations/sysadmin) had always had friction: developers shipped code quickly; ops teams were responsible for stability and pushed back on rapid changes.

The DevOps movement's insight: make deployment so automated and reliable that it's not a scary event. If deploying is a button click that takes 5 minutes and automatically rolls back on failure, you can deploy 10 times a day instead of once a month. The 10-times-a-day deployment has smaller changes, smaller risk per deploy, and faster bug detection.

By 2013, configuration management tools like Puppet and Chef, along with AWS CloudFormation, were making the infrastructure itself version-controlled and repeatable. "Infrastructure as Code" became the phrase. Jenkins pipelines became standard. The cultural shift from "ops as a gatekeeper" to "ops as a platform team" was underway.

The 2012 Jenkins setup we ran was primitive by 2024 standards (modern GitHub Actions or GitLab CI are far more capable). But the foundational idea — automated tests, automated deployment, pipeline as a gate — is unchanged. We still run from that same playbook, just with better tools.
