Beyond ChatGPT

The Mind-Blowing AI Breakthroughs Being Developed by OpenAI

Pradeep Singh
7 min read · Mar 20, 2023

OpenAI is a world-renowned research organization that’s making groundbreaking advancements in the field of artificial intelligence. They’re constantly pushing the boundaries of what’s possible, and their work is shaping the future of technology in countless ways. From developing cutting-edge language models like ChatGPT to creating robots that can perform complex tasks, OpenAI is at the forefront of AI innovation.

Images

OpenAI’s image-related projects are no exception to this ambition. Two notable ones are DALL-E 2 and CLIP.

DALL-E 2

In January 2021, OpenAI introduced DALL-E, a system that can generate images from text. DALL-E 2 (announced in April 2022) takes it to the next level with higher resolution, greater comprehension, and new capabilities.

DALL-E was created by training neural networks on images and their text descriptions. Through deep learning, it understands not only individual objects but also the relationships between them.

A few sample images generated by DALL-E 2 include:

  • A little mouse, surprised, instant camera flash
  • A weekend of walks

The DALL-E research has three main outcomes:

  1. It can help people express themselves visually in ways they may not have been able to before.
  2. An AI-generated image can tell us a lot about whether the system understands us or is just repeating what it has been taught.
  3. DALL-E helps humans understand how advanced AI systems see and understand our world. This is a critical part of developing AI that’s useful and safe.

DALL-E 2, a deep-learning model trained to generate digital images from natural language descriptions, has limitations. Incorrectly labelled objects can confuse it, leading it to generate the wrong image, and gaps in its training data can cause it to produce inaccurate results. However, what’s exciting about DALL-E’s approach is that it can take what it has learned from a variety of labelled images and apply it to new ones.

The original DALL-E, a 12-billion-parameter model, can create anthropomorphized versions of animals and objects, combine unrelated concepts in plausible ways, render text, and apply transformations to existing images. DALL-E 2 generates more realistic and accurate images with four times greater resolution.
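
For developers, DALL-E 2 is also available through OpenAI’s Images API. Below is a minimal sketch of requesting a single image with the v0.x openai Python package that was current when this article was written; it assumes an API key in the OPENAI_API_KEY environment variable, and the prompt reuses one of the samples above.

```python
# Minimal sketch: generating an image with DALL-E 2 via OpenAI's Images API
# (openai Python package, v0.x style; reads OPENAI_API_KEY from the environment).
import openai

response = openai.Image.create(
    prompt="a little mouse, surprised, instant camera flash",
    n=1,                # number of images to generate
    size="1024x1024",   # supported sizes: 256x256, 512x512, 1024x1024
)

print(response["data"][0]["url"])  # temporary URL of the generated image
```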

CLIP (Contrastive Language–Image Pre-training)

OpenAI introduced a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.

  • Trained on a wide variety of images with natural language supervision
  • Can be instructed in natural language to perform a great variety of classification benchmarks
  • Closes the “robustness gap” of standard ImageNet models by up to 75%
  • Matches the performance of the original ResNet-50 on ImageNet zero-shot, without using any of the original 1.28M labelled examples

CLIP builds on a large body of work on zero-shot transfer, natural language supervision, and multimodal learning. The neural network is trained on text paired with images found online, which provides a proxy training task: given an image, predict which of 32,768 randomly sampled text snippets was actually paired with it in the dataset.
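
Since OpenAI released CLIP’s weights, the zero-shot recipe is easy to try yourself. Below is a minimal sketch using the public ViT-B/32 checkpoint through Hugging Face Transformers; the image path and candidate labels are placeholders, and classification is just a matter of naming the categories as text.

```python
# Minimal zero-shot classification sketch with the public CLIP ViT-B/32
# checkpoint via Hugging Face Transformers (assumes the transformers, torch,
# and Pillow packages are installed; image path and labels are placeholders).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder: any local image
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Similarity of the image to each candidate caption, softmaxed into probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0]):
    print(f"{label}: {p:.3f}")
```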

Text

GPT-4

GPT-4 is the latest and most advanced AI system developed by OpenAI. It can handle over 25,000 words of text in a single exchange, roughly eight times more than its predecessor, ChatGPT. GPT-4 can even accept images as input and reason about them.

GPT-4 can help dreams, thoughts, and ideas flourish as text in front of you. It can be used as a tool for getting useful language tasks done, but it is more than that: it is a system that can amplify what every person can do, bringing value to everyday life and ultimately a better quality of life.

One of the most compelling use cases of GPT-4 is education. It can teach a huge range of subjects, personalized to the skill level of each individual. Imagine giving a 5th grader a personal math tutor with unlimited time and patience, as in the sketch below.
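
Here is a minimal sketch of asking GPT-4 to act as a patient tutor through OpenAI’s chat completions API. It uses the v0.x openai Python package that was current when this article was written and assumes an API key with GPT-4 access in the OPENAI_API_KEY environment variable; the system prompt and question are just examples.

```python
# Minimal sketch: GPT-4 as a patient math tutor via OpenAI's chat completions
# API (openai Python package, v0.x style; requires GPT-4 API access).
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a patient math tutor for a 5th grader. "
                    "Explain step by step and ask guiding questions."},
        {"role": "user", "content": "Why does 3/4 equal 0.75?"},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```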

Computing has come a long way, from the transistor to the semiconductor industry, the personal computer, and the internet. GPT-4 is just the beginning, and it is already easy to imagine the impact of its successors many generations down the line.

OpenAI has put a lot of internal guardrails in place around adversarial usage, unwanted content, and privacy concerns. Still, GPT-4 is not perfect and can make mistakes, so it is important to check that its output meets your expectations.

Audio

OpenAI has also been making significant strides in the realm of audio-based AI. Three notable projects in this domain are Whisper, Jukebox, and MuseNet.

Whisper

Want speech recognition that rivals humans? Meet Whisper, an OpenAI neural net that approaches human-level robustness and accuracy on English speech. Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web.

Here’s what makes Whisper stand out:

  • It uses a large and diverse dataset that leads to improved robustness to accents, background noise, and technical language.
  • It enables transcription in multiple languages, as well as translation from those languages into English.
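
Whisper is fully open source, so both transcription and translation take only a few lines. The sketch below uses the openai-whisper package with its small “base” checkpoint; the audio filename is a placeholder, and ffmpeg must be installed on the system.

```python
# Minimal transcription sketch with the open-source openai-whisper package
# (pip install openai-whisper; requires ffmpeg on the PATH).
import whisper

model = whisper.load_model("base")          # small multilingual checkpoint
result = model.transcribe("interview.mp3")  # placeholder: any local audio file

print(result["language"])  # auto-detected source language
print(result["text"])      # full transcript

# Translation into English uses the same call with task="translate".
english = model.transcribe("interview.mp3", task="translate")
print(english["text"])
```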

Jukebox

Are you ready for music that’s generated by an AI? Look no further than Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artistic styles. With Jukebox, you can provide the genre, artist, and lyrics as input, and it will output a new music sample produced from scratch.

Here are some of the most popular samples generated by Jukebox:

  • Classic Pop, in the style of Frank Sinatra
  • Pop, in the style of Katy Perry
  • Pop, in the style of Rick Astley

MuseNet

If you’re a music lover, you’ll love MuseNet. It’s a deep neural network that can generate 4-minute musical compositions with 10 different instruments and can combine styles from country to Mozart to the Beatles.

What’s unique about MuseNet?

  • It was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files.
  • It uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text.
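
To make the next-token framing concrete, here is a toy sketch of turning a MIDI file into a token sequence that such a model could be trained on. It uses the mido package; the filename and the encoding itself are naive stand-ins for illustration, not OpenAI’s actual scheme.

```python
# Toy sketch: encode a MIDI file as a token sequence for next-token
# prediction, in the spirit of MuseNet's training setup (uses mido;
# the encoding is a naive stand-in, not OpenAI's actual one).
import mido

tokens = []
for msg in mido.MidiFile("song.mid"):  # placeholder: any local MIDI file
    if msg.type == "note_on" and msg.velocity > 0:
        # Encode each note start as a (pitch, coarse-time) token.
        tokens.append(f"NOTE_{msg.note}_T{round(msg.time, 2)}")

# A sequence model would now be trained to predict tokens[i] from tokens[:i].
print(tokens[:10])
```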

Past Innovations

ImageGPT

Just as a large transformer model trained on language can generate coherent text, the same model trained on pixel sequences can generate coherent image completions and samples.

By establishing a correlation between sample quality and image classification accuracy, OpenAI showed that their best generative model also contains features competitive with top convolutional nets in the unsupervised setting.
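
The core idea is to treat pixels the way a language model treats words. The toy sketch below shows how an image becomes a 1-D token sequence with next-token training pairs; the real ImageGPT quantizes colours into a learned 9-bit palette rather than using raw values like this.

```python
# Toy sketch of the ImageGPT idea: flatten an image into a 1-D sequence of
# discrete tokens and train an autoregressive model to predict the next one.
# (Illustrative only; ImageGPT uses a large transformer and a learned 9-bit
# colour palette, not this naive 8-bit grayscale quantization.)
import numpy as np

image = np.random.randint(0, 256, size=(32, 32))  # stand-in for a real image
tokens = image.flatten()                          # raster-scan pixel order

# Next-token training pairs: the model sees tokens[:i] and predicts tokens[i],
# exactly as a language model predicts the next word.
context, target = tokens[:-1], tokens[1:]
print(context.shape, target.shape)  # (1023,) (1023,)
```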

Solving Rubik’s cube with a robot hand

OpenAI trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. The networks are trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR).

  • The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe.
  • This shows that reinforcement learning isn’t just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.

[Image: OpenAI’s robot solving a Rubik’s Cube]

Solving a Rubik’s Cube one-handed is a challenging task even for humans, and it takes children several years to gain the dexterity required to master it. OpenAI’s robot still hasn’t perfected its technique, as it solves the Rubik’s Cube 60% of the time (and only 20% of the time for a maximally difficult scramble).
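
Automatic Domain Randomization is what makes this robustness possible: the range of simulated conditions grows automatically as the policy masters the current range. Here is a toy sketch of that feedback loop for a single parameter; the threshold, step size, and evaluation stub are invented for illustration and are not OpenAI’s actual configuration.

```python
# Toy sketch of Automatic Domain Randomization (ADR): widen the range a
# simulation parameter is sampled from whenever the policy performs well
# at the current boundary. (Illustrative only; values are made up.)
import random

low, high = 0.9, 1.1         # initial randomization range, e.g. cube size scale
step, threshold = 0.05, 0.8  # expansion step and required success rate

def evaluate_policy_at(value):
    # Stand-in for running the trained policy in simulation at this setting.
    return random.random()   # pretend success rate

for _ in range(100):
    boundary = random.choice([low, high])         # probe one range boundary
    if evaluate_policy_at(boundary) > threshold:  # good enough at the edge?
        if boundary == low:
            low -= step                           # widen the range downward
        else:
            high += step                          # widen the range upward

print(f"final randomization range: [{low:.2f}, {high:.2f}]")
```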

Emergent tool use from multi-agent interaction

OpenAI observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in a new simulated hide-and-seek environment, agents build a series of six distinct strategies and counterstrategies, some of which were not previously known. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behaviour.

In this environment, agents play a team-based hide-and-seek game. Hiders (blue) are tasked with avoiding line-of-sight from the seekers (red), and seekers are tasked with keeping the hiders in their line of sight. Objects scattered throughout the environment can be grabbed and locked in place by either team, and agents must learn to navigate randomly generated immovable rooms and walls. Before the game begins, hiders get a preparation phase during which seekers are immobilized, giving the hiders a chance to run away or change their environment.

In conclusion, OpenAI is at the forefront of AI innovation, with groundbreaking projects spanning images, text, audio, and robotics that pave the way for a transformative future in technology.
