By Ylva

Exploring GPT-4: What We Know and What We Can Expect

On March 14, 2023, OpenAI announced GPT-4, and experts are calling it a real game changer, transforming the way we work and the way we interact with AI.

What is GPT-4?

GPT-4 is the newest version of OpenAI’s language model, released on March 14, 2023. The previous version, GPT-3.5, is the model that has powered ChatGPT since its release in November 2022. At the time of writing, GPT-4 requires the paid version of ChatGPT, ChatGPT Plus. GPT-4 is also available as an API for developers to build applications and services (currently via a waitlist).

So what does GPT-4 have that GPT-3.5 doesn’t? In OpenAI’s own words:

Creativity - GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.

Visual input - GPT-4 can accept images as inputs and generate captions, classifications, and analyses.

Longer context - GPT-4 is capable of handling over 25,000 words of text, allowing for use cases like long-form content creation, extended conversations, and document search and analysis.

Is GPT-4 the same as ChatGPT?

GPT-4 is the engine behind ChatGPT, but currently only behind the paid version; the free version still runs on GPT-3.5. And GPT-4 can be used to power much more than chatbots.

GPT-4 vs GPT-3.5 - what’s the difference?

According to OpenAI, GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.

GPT-4 is reportedly trained on significantly more data than GPT-3.5, which helps it generate more accurate results.

GPT-4 can take images as well as text as input, and it can read and manipulate around 25,000 words at a time, compared to roughly 3,000 words for GPT-3.5.

As demonstrated in OpenAI’s developer demo, GPT-4 is capable of transforming napkin sketches into actual websites. Based on this mind-blowing demo, GPT-4 has been compared to transformative products such as Apple’s iPod and iPhone.

The demo also shows just how much more advanced GPT-4 is compared to its predecessor. Its chat completions playground includes a system message, which is where you explain to the model what it’s supposed to do. GPT-4 adheres to the system message very well, and OpenAI expects it to become an increasingly reliable way to steer the model.

With GPT-4, we’re moving away from “raw text in, raw text out” and into a much more structured format which gives the model the opportunity to really listen to the developer and give much better responses.
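To illustrate this structured format, here is a minimal sketch of a chat-completions-style request payload. The exact model name and endpoint are assumptions here; check OpenAI’s current API documentation for the details.

```python
import json

# A chat-completions-style request: instead of one raw text prompt,
# the input is a structured list of role-tagged messages.
payload = {
    "model": "gpt-4",  # assumed model identifier
    "messages": [
        # The system message tells the model what it is supposed to do;
        # GPT-4 follows it much more faithfully than GPT-3.5 did.
        {
            "role": "system",
            "content": "You are a friendly copywriter. Answer in one short paragraph.",
        },
        # The user message carries the actual request.
        {
            "role": "user",
            "content": "Write a tagline for a coffee-break marketing newsletter.",
        },
    ],
}

# This JSON body would be POSTed to the chat completions endpoint
# (e.g. https://api.openai.com/v1/chat/completions) with an API key.
request_body = json.dumps(payload)
```

The key design point is the separation of roles: the developer sets the rules once in the system message, and end-user input stays in user messages, so the model can distinguish instructions from content.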

Why does this matter?

GPT-4 is a game changer compared to GPT-3.5 because it can read and manipulate 25,000 words at a time, which means it can handle far more complex tasks.

Also, GPT-4's ability to transform napkin sketches into actual websites is a significant advancement that has the potential to revolutionize the way digital products are created.

With GPT-4, AI technology becomes increasingly capable of handling tasks that were once done by humans.

For example, it can use visual inputs to describe images and provide step-by-step reasoning based on a chart or graph.

The GPT-4 model is much better at generating and collaborating on creative projects as well. Imagine being able to work with a machine to create music, screenplays, or even write in your unique writing style. That’s next level!

GPT-4 can also make website building a breeze. ChatGPT can already code well, but it frequently loses context. With GPT-4’s ability to recognize images and its larger context window, the use cases for coding and building websites have widened.

We can now simply present a sketch of a website and it will create a functional website based on the draft. It will also generate content to fill in the gaps.
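At the time of writing, image input is not yet generally available through the API, but based on OpenAI’s announcements a multimodal request could be sketched roughly like this. The content structure and field names below are assumptions for illustration, not confirmed API details.

```python
import base64


def build_image_message(image_bytes: bytes, prompt: str) -> dict:
    """Build a hypothetical user message pairing an image (e.g. a photo
    of a napkin sketch) with a text instruction for the model."""
    # Encode the raw image bytes so they can travel inside a JSON body.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            # The text part carries the instruction...
            {"type": "text", "text": prompt},
            # ...and the image part carries the sketch as a data URL
            # (assumed format).
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{encoded}"},
            },
        ],
    }
```

Usage would look something like `build_image_message(open("sketch.png", "rb").read(), "Turn this sketch into an HTML page")`, with the resulting message sent in place of a plain-text user message.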

Use cases for GPT-4


Duolingo

Duolingo has launched a new subscription tier called Duolingo Max, which utilizes the GPT-4 model. It features two major additions, "Explain My Answer" and "Roleplay," aimed at enhancing the in-app learning experience. "Explain My Answer" helps users understand why their answer is incorrect by chatting with Duolingo's mascot, Duo, who is now an interactive virtual tutor.

The "Roleplay" feature enables users to practice their language skills by engaging in real conversations that mirror real-life communication scenarios. The conversations never repeat, making the experience more realistic and effective.

Be My Eyes

The Danish mobile app Be My Eyes helps the visually impaired to recognize objects and manage everyday situations. It provides in-app assistance to users through an online community of human volunteers, and now GPT-4 has also joined the volunteer team. Users can connect with volunteers via live chat and share photos or videos to get help in situations they find challenging due to their disability.


Elicit

Elicit is an AI research assistant that uses language models to automate research workflows: it can find papers you’re looking for, answer your research questions, and summarize key points from a paper. A new version of Elicit that uses GPT-4 is available, but it is still in private beta.

The bottom line

The release of OpenAI's GPT-4 language model is a game-changer in the world of AI.

With the ability to handle over 25,000 words, GPT-4 can tackle more complex tasks and transform napkin sketches into functional websites, making website building easier than ever before.

The model is also more creative and able to collaborate on various creative projects, such as music composition and screenwriting.

GPT-4 has already found use cases in language learning, assistance for the visually impaired, and AI research, with the potential for more applications to come.

Overall, GPT-4 marks a significant step forward in the capabilities of AI technology and has the potential to revolutionize the way we work and interact with machines.



Want to read more stories like this?

Sign up for Marketing At Scale to level up within marketing and tech, all within your coffee break.
