All you need to know about OpenAI Sora

OpenAI’s text-to-video model, Sora, has taken the world by storm.

What is OpenAI Sora?

According to the OpenAI website, Sora is a text-to-video model that can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

It was created by OpenAI, the US-based artificial intelligence research organization that also created the famous chatbot ChatGPT.

What does Sora do?

Sora promises to be a game-changer in the creative industries, particularly filmmaking. With descriptive text instructions, creators can bring previously imagined worlds to life, pushing the limits of creativity and opening up new possibilities.

“Sora can generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” the company said.

How does it work?

According to the company, Sora has a deep understanding of language, enabling it to interpret prompts accurately and generate compelling characters that express vibrant emotions. The model can also create multiple shots within a single generated video that accurately portray the characters and visual style.

Who can access it?

At the moment, Sora is accessible only to red teamers, who are assessing areas of risk, and to a select group of creatives, who are providing feedback on how best to advance the model.

“Red teaming” is a structured testing method for finding flaws in AI systems. The process involves simulating attacks on the model to identify weaknesses and vulnerabilities. The goal is to find the ways the system does not work as intended, and then fix them.

Which safety measures have been put in place?

Safety in artificial intelligence is a huge concern for many people, and rightly so. Below are the measures being put in place by OpenAI to make Sora safe:

  1. Building tools to help detect misleading content, such as a detection classifier that can tell when a video was generated by Sora.
  2. Leveraging existing safety methods that OpenAI built for its products that use DALL·E 3, which apply to Sora as well. These include a text classifier that checks and rejects text input prompts that violate OpenAI’s usage policies, such as those requesting extreme violence, sexual content, hateful imagery, celebrity likeness, or the intellectual property of others (the sketch after this list illustrates the general idea of such a prompt check).
  3. Engaging policymakers, educators and artists worldwide to understand their concerns and identify positive use cases for the technology.
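OpenAI has not published the internals of Sora’s prompt classifier, but the general idea of screening a text prompt before generation can be shown in a minimal Python sketch. The sketch below uses OpenAI’s publicly documented Moderations API as a stand-in for Sora’s own classifier; the example prompt and the pass/reject messages are hypothetical and purely illustrative.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

def prompt_passes_check(prompt: str) -> bool:
    """Return True if the prompt is not flagged by the Moderations endpoint.

    This is an illustrative stand-in, not Sora's actual text classifier.
    """
    response = client.moderations.create(input=prompt)
    return not response.results[0].flagged

# Hypothetical video prompt used only for this example.
video_prompt = "A golden retriever surfing a wave at sunset, in cinematic slow motion"

if prompt_passes_check(video_prompt):
    print("Prompt passed the safety check and could be sent on for video generation.")
else:
    print("Prompt was rejected by the safety check.")
```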
