OpenAI’s Sora was the creepiest app on your phone — now it’s shutting down


March 24, 2026 · 4 min read

Learn how to create AI video effects using Python and basic computer vision libraries. This tutorial teaches you the fundamentals of video processing and transformation that underlie advanced AI video generation tools.

Introduction

In this tutorial, you'll learn to work with the kind of video-processing techniques that underpin AI video generation tools like OpenAI's Sora. While Sora itself is shutting down, the underlying concepts and tools for generating video content with AI are still very much alive. We'll walk through setting up a basic video-processing environment using Python and popular libraries, then build simple video effects that illustrate the building blocks of AI video generation.

Prerequisites

Before starting this tutorial, you'll need:

  • A computer with internet access
  • Python 3.7 or higher installed
  • Basic understanding of command line operations
  • Approximately 30 minutes to complete

Step-by-Step Instructions

Step 1: Set Up Your Python Environment

Create a new project directory

First, we'll create a dedicated folder for our AI video project. Open your terminal or command prompt and run:

mkdir ai-video-project
cd ai-video-project

This creates a new folder called 'ai-video-project' and navigates into it. This keeps all our project files organized.

Install required Python packages

We need several libraries to work with AI video generation. Run these commands in your terminal:

pip install opencv-python
pip install numpy
pip install pillow

These packages provide the foundation for video processing, numerical operations, and image handling that we'll need for our AI video experiments.
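Note that the pip package names differ from the names you import: opencv-python is imported as cv2, and pillow as PIL. A quick stdlib-only sanity check (you could save it as a hypothetical check_install.py) confirms everything landed:

```python
import importlib.util

# The pip package names differ from the import names
# (opencv-python -> cv2, pillow -> PIL), so check the import names.
for module_name in ("cv2", "numpy", "PIL"):
    found = importlib.util.find_spec(module_name) is not None
    print(f"{module_name}: {'found' if found else 'MISSING'}")
```

If any line reports MISSING, re-run the corresponding pip install command before continuing.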

Step 2: Create Your First AI Video Effect

Set up the basic Python script

Create a new file called video_generator.py in your project directory (on macOS or Linux you can use touch; on Windows, just create the file in your code editor):

touch video_generator.py

Open this file in your code editor and add the basic imports:

import cv2
import numpy as np
from PIL import Image
import os

These imports give us access to computer vision functions, array operations, image handling, and file system operations.

Write the main video processing function

Add this function to your script:

def create_simple_video_effect(input_path, output_path, effect_type="grayscale"):
    # Open the input video file
    cap = cv2.VideoCapture(input_path)
    if not cap.isOpened():
        raise IOError(f"Could not open input video: {input_path}")
    
    # Get video properties
    fps = int(cap.get(cv2.CAP_PROP_FPS))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    
    # Define the codec and create VideoWriter object
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
    
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        
        # Apply effect based on type
        if effect_type == "grayscale":
            # Convert to grayscale, then back to 3 channels so the
            # VideoWriter still receives BGR frames
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        elif effect_type == "invert":
            frame = cv2.bitwise_not(frame)
        
        # Write the frame to the output video
        out.write(frame)
    
    # Release both files
    cap.release()
    out.release()
    
    print(f"Processed video saved to {output_path}")

This function reads an input video, applies a visual effect, and saves the result. It demonstrates how AI video generation works at a basic level - by processing frames one by one and applying transformations.
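The function above hard-codes two effects, but the per-frame pattern extends to any transform that maps one frame to another. As an illustrative sketch (apply_brightness is a hypothetical helper, not part of OpenCV), here is a pure-NumPy effect you could wire in as another elif branch:

```python
import numpy as np

def apply_brightness(frame, delta=40):
    # Brighten every pixel by `delta`, clipping at 255.
    # Widen to int16 first so 8-bit values don't wrap around on overflow.
    return np.clip(frame.astype(np.int16) + delta, 0, 255).astype(np.uint8)

# Demo on a tiny synthetic "frame": near-white pixels clip to pure white.
frame = np.full((2, 2, 3), 240, dtype=np.uint8)
print(apply_brightness(frame)[0, 0])  # [255 255 255]
```

Inside create_simple_video_effect, this would slot in as elif effect_type == "brighten": frame = apply_brightness(frame).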

Step 3: Test Your AI Video Generator

Create a sample video for testing

Before testing our AI effect, we need a video to process. Create a simple test video using Python:

def create_test_video(filename="test_video.mp4"):
    # Create a simple animation
    width, height = 640, 480
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    out = cv2.VideoWriter(filename, fourcc, 20.0, (width, height))
    
    for i in range(100):
        # Create a frame with moving circle
        frame = np.zeros((height, width, 3), dtype=np.uint8)
        center_x = int((i * 5) % width)
        center_y = int((i * 3) % height)
        cv2.circle(frame, (center_x, center_y), 30, (0, 255, 0), -1)
        out.write(frame)
    
    out.release()
    print(f"Test video created: {filename}")

This creates a simple animated video with a moving green circle to test our effects on.

Run the complete example

Add this code to the end of your video_generator.py file:

if __name__ == "__main__":
    # Create test video
    create_test_video()
    
    # Apply AI-like effects
    create_simple_video_effect("test_video.mp4", "grayscale_output.mp4", "grayscale")
    create_simple_video_effect("test_video.mp4", "invert_output.mp4", "invert")
    
    print("AI video effects applied successfully!")

When you run this script, it will create a test video and then apply different visual effects to it, simulating how AI video generation tools might process content.

Step 4: Run Your Video Generation Script

Execute your script

In your terminal, run:

python video_generator.py

This will execute your entire script, creating a test video and applying different effects to it. You should see output messages confirming each step.

View your results

After running, you'll have three files in your directory:

  • test_video.mp4 - The original animated video
  • grayscale_output.mp4 - Video with grayscale effect
  • invert_output.mp4 - Video with inverted colors

Open these videos to see how different transformations affect the original content.
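If you'd rather confirm the outputs programmatically than open each file by hand, a small stdlib-only check (a hypothetical check_outputs.py) will do:

```python
import os

# List the files the script should have produced and report their sizes.
expected = ["test_video.mp4", "grayscale_output.mp4", "invert_output.mp4"]
for name in expected:
    if os.path.exists(name):
        print(f"{name}: {os.path.getsize(name) / 1024:.1f} KB")
    else:
        print(f"{name}: not found (run video_generator.py first)")
```

A nonzero size for all three files means every stage of the pipeline wrote frames successfully.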

Summary

In this tutorial, you've learned how to create a basic AI video generation environment using Python. While we didn't use the actual Sora technology, we've built the foundational understanding of how AI video generation works - by processing video frames and applying transformations. The code demonstrates core concepts like video reading/writing, frame-by-frame processing, and applying visual effects that are fundamental to more advanced AI video generation tools.

This hands-on approach gives you a practical understanding of the building blocks that AI video generation platforms like Sora are built upon. Even though Sora is shutting down, the principles of AI video processing remain relevant and continue to evolve in the field of artificial intelligence.
