AirTrunk acquires Lumina CloudInfra to enter India with 600MW of planned capacity
Tech Tutorial · Intermediate


April 20, 2026 · 5 min read

Learn to build a data center infrastructure management system that tracks capacity and resources, similar to what AirTrunk and Lumina CloudInfra use to manage their hyperscale operations in India.

Introduction

In the rapidly expanding data center industry, companies like AirTrunk and Lumina CloudInfra are positioning themselves to capture market share in emerging regions like India. This tutorial will guide you through creating a data center infrastructure management system using Python and REST APIs, similar to what these companies might employ to manage their growing hyperscale operations.

This hands-on tutorial will teach you how to build a basic data center infrastructure monitoring system that can track capacity, manage resources, and provide insights for scaling operations. You'll learn to implement core concepts like resource allocation, capacity planning, and API-based data management.

Prerequisites

  • Basic understanding of Python programming
  • Knowledge of REST APIs and HTTP requests
  • Familiarity with JSON data structures
  • Python virtual environment setup
  • Basic understanding of data center concepts (servers, storage, networking)
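As a quick refresher on the JSON structures used throughout this tutorial, here is the payload shape the API will accept later, round-tripped with Python's built-in `json` module (the field names match the `DataCenter` model built in step 2):

```python
import json

# The request body our API will expect when creating a data center
payload = {"name": "AirTrunk India Hub", "capacity_mw": 600, "location": "Mumbai"}

encoded = json.dumps(payload)   # str, suitable as an HTTP request body
decoded = json.loads(encoded)   # back to a Python dict

print(decoded["capacity_mw"])   # 600
```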

Step-by-Step Instructions

1. Set Up Your Development Environment

First, we'll create a virtual environment to isolate our project dependencies.

python -m venv datacenter_env
source datacenter_env/bin/activate  # On Windows: datacenter_env\Scripts\activate
pip install flask requests

Why this step? Using a virtual environment ensures that our project dependencies don't interfere with other Python projects on your system.

2. Create the Data Center Model

Next, we'll define a basic data center model that represents capacity and resources.

import json
from datetime import datetime

class DataCenter:
    def __init__(self, name, capacity_mw, location):
        self.name = name
        self.capacity_mw = capacity_mw
        self.location = location
        self.resources = []
        self.created_at = datetime.now().isoformat()

    def add_resource(self, resource):
        self.resources.append(resource)

    def to_dict(self):
        return {
            'name': self.name,
            'capacity_mw': self.capacity_mw,
            'location': self.location,
            # Serialize nested resource objects so the result is JSON-safe
            'resources': [r.to_dict() for r in self.resources],
            'created_at': self.created_at
        }

    def __str__(self):
        return f"DataCenter(name={self.name}, capacity={self.capacity_mw}MW, location={self.location})"

# Create a sample data center
sample_dc = DataCenter("AirTrunk India Hub", 600, "Mumbai")
print(sample_dc)

Why this step? This creates a foundation for representing data center infrastructure, which is crucial for managing capacity planning and resource allocation.

3. Implement Resource Management

Now we'll add functionality to manage server resources within our data center.

class ServerResource:
    def __init__(self, server_id, cpu_cores, memory_gb, storage_gb, status="active"):
        self.server_id = server_id
        self.cpu_cores = cpu_cores
        self.memory_gb = memory_gb
        self.storage_gb = storage_gb
        self.status = status
        self.allocated = False

    def to_dict(self):
        return {
            'server_id': self.server_id,
            'cpu_cores': self.cpu_cores,
            'memory_gb': self.memory_gb,
            'storage_gb': self.storage_gb,
            'status': self.status,
            'allocated': self.allocated
        }

# Add some servers to our data center
sample_dc.add_resource(ServerResource("SRV-001", 32, 128, 2000))
sample_dc.add_resource(ServerResource("SRV-002", 16, 64, 1000))

print(json.dumps(sample_dc.to_dict(), indent=2))

Why this step? Resource management is essential for tracking capacity utilization and ensuring efficient allocation of computing resources, similar to what AirTrunk would need for its 600MW infrastructure.
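To make resource tracking actionable, you could pair it with a simple allocation helper that hands out the first free server meeting a workload's requirements. This is a minimal sketch built around the `ServerResource` class above (redefined here so the snippet runs on its own); the minimum-cores/minimum-memory matching criteria are illustrative assumptions, not part of the tutorial's API.

```python
class ServerResource:
    """Minimal stand-in for the ServerResource class defined above."""
    def __init__(self, server_id, cpu_cores, memory_gb, storage_gb, status="active"):
        self.server_id = server_id
        self.cpu_cores = cpu_cores
        self.memory_gb = memory_gb
        self.storage_gb = storage_gb
        self.status = status
        self.allocated = False

def allocate_server(resources, min_cores, min_memory_gb):
    """Return the first free, active server meeting the requirements, or None."""
    for server in resources:
        if (not server.allocated and server.status == "active"
                and server.cpu_cores >= min_cores
                and server.memory_gb >= min_memory_gb):
            server.allocated = True
            return server
    return None

servers = [
    ServerResource("SRV-001", 32, 128, 2000),
    ServerResource("SRV-002", 16, 64, 1000),
]
match = allocate_server(servers, min_cores=24, min_memory_gb=96)
print(match.server_id if match else "no capacity")  # SRV-001 meets both minimums
```

A first-fit scan like this is fine for a demo; a production scheduler would also weigh storage, rack placement, and power budget.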

4. Create REST API Endpoints

We'll build a Flask-based API to manage our data center infrastructure.

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory storage (in production, use a database)
data_centers = {}

@app.route('/datacenters', methods=['POST'])
def create_datacenter():
    data = request.get_json()
    dc = DataCenter(data['name'], data['capacity_mw'], data['location'])
    data_centers[dc.name] = dc
    return jsonify(dc.to_dict()), 201

@app.route('/datacenters/<name>', methods=['GET'])
def get_datacenter(name):
    dc = data_centers.get(name)
    if dc:
        return jsonify(dc.to_dict())
    return jsonify({'error': 'Data center not found'}), 404

@app.route('/datacenters/<name>/resources', methods=['POST'])
def add_resource(name):
    dc = data_centers.get(name)
    if not dc:
        return jsonify({'error': 'Data center not found'}), 404
    
    data = request.get_json()
    resource = ServerResource(data['server_id'], data['cpu_cores'], data['memory_gb'], data['storage_gb'])
    dc.add_resource(resource)
    return jsonify(resource.to_dict()), 201

if __name__ == '__main__':
    app.run(debug=True)

Why this step? REST APIs enable scalable, standardized communication between systems, which is essential for managing distributed data center infrastructure across different regions.

5. Test the API

Let's test our API using curl commands to verify functionality.

# Create a data center
curl -X POST http://localhost:5000/datacenters \
  -H "Content-Type: application/json" \
  -d '{"name":"AirTrunk India Hub","capacity_mw":600,"location":"Mumbai"}'

# Add a server resource
curl -X POST "http://localhost:5000/datacenters/AirTrunk%20India%20Hub/resources" \
  -H "Content-Type: application/json" \
  -d '{"server_id":"SRV-001","cpu_cores":32,"memory_gb":128,"storage_gb":2000}'

Why this step? Testing ensures our implementation works correctly and provides a foundation for more complex data center management operations.

6. Add Capacity Planning Functionality

Let's enhance our system with capacity planning features to help with scaling decisions.

def calculate_utilization(dc):
    total_capacity = dc.capacity_mw
    # Simplified for the demo: treat each server as drawing a flat 100 MW
    used_capacity = sum(100 for _ in dc.resources)
    utilization = (used_capacity / total_capacity) * 100
    return utilization

def get_capacity_recommendation(dc):
    utilization = calculate_utilization(dc)
    if utilization > 80:
        return "High utilization - consider expansion"
    elif utilization > 60:
        return "Moderate utilization - monitor closely"
    else:
        return "Low utilization - consider optimization"

# Test the capacity planning
print(f"Utilization: {calculate_utilization(sample_dc):.1f}%")
print(f"Recommendation: {get_capacity_recommendation(sample_dc)}")

Why this step? Capacity planning is crucial for data center operations, especially when expanding into new markets like India where companies like AirTrunk are investing heavily.
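The flat 100 MW-per-server figure in the demo is only a placeholder. A slightly more plausible sketch estimates each server's draw from its specs; the watts-per-core and watts-per-GB coefficients below are invented for illustration and would come from vendor power data in practice.

```python
def estimate_power_kw(cpu_cores, memory_gb, watts_per_core=15.0, watts_per_gb=0.4):
    """Rough per-server power estimate in kW (coefficients are illustrative)."""
    return (cpu_cores * watts_per_core + memory_gb * watts_per_gb) / 1000.0

def utilization_percent(capacity_mw, servers):
    """IT-load utilization as a percentage of total facility capacity in MW."""
    used_mw = sum(estimate_power_kw(cores, mem) for cores, mem in servers) / 1000.0
    return (used_mw / capacity_mw) * 100.0

# (cpu_cores, memory_gb) pairs for a small fleet
fleet = [(32, 128), (16, 64)]
print(f"{utilization_percent(600, fleet):.6f}%")
```

Unsurprisingly, two servers barely register against a 600 MW facility; the same functions scale to fleet-sized inputs.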

Summary

This tutorial demonstrated how to build a foundational data center infrastructure management system using Python and REST APIs. You've learned to create data center models, manage server resources, implement API endpoints, and perform basic capacity planning calculations.

While this is a simplified implementation, it mirrors the core concepts that companies like AirTrunk and Lumina CloudInfra would use to manage their expanding hyperscale data center operations. The system can be extended with database integration, advanced monitoring, and more sophisticated capacity planning algorithms to handle real-world complexity.

For production use, you would want to integrate with actual monitoring systems, implement proper authentication, and scale the architecture to handle large volumes of data center operations.
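As a first step toward the database integration mentioned above, the in-memory `data_centers` dict could be swapped for Python's built-in `sqlite3` module. This is a minimal sketch; the `datacenters` table schema is an assumption for illustration, and a real deployment would use a file-backed database or a dedicated DBMS rather than `:memory:`.

```python
import sqlite3

# In-memory database for illustration; use a file path (or a real DBMS) in practice
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE datacenters (
        name TEXT PRIMARY KEY,
        capacity_mw REAL NOT NULL,
        location TEXT NOT NULL
    )
""")

def save_datacenter(conn, name, capacity_mw, location):
    """Insert or update a data center row."""
    conn.execute(
        "INSERT OR REPLACE INTO datacenters VALUES (?, ?, ?)",
        (name, capacity_mw, location),
    )
    conn.commit()

def load_datacenter(conn, name):
    """Return a dict for the named data center, or None if absent."""
    row = conn.execute(
        "SELECT name, capacity_mw, location FROM datacenters WHERE name = ?",
        (name,),
    ).fetchone()
    return dict(zip(("name", "capacity_mw", "location"), row)) if row else None

save_datacenter(conn, "AirTrunk India Hub", 600, "Mumbai")
print(load_datacenter(conn, "AirTrunk India Hub"))
```

With this in place, the Flask handlers would call `save_datacenter`/`load_datacenter` instead of touching the dict, and state would survive a restart when backed by a file.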

Source: TNW Neural
