Here are the latest updates on new versions of ChatGPT and related enhancements that have recently been released. I'll try to cover everything in one post without overloading you with unnecessary detail.
First: last night a new version, 5.4, was launched, just a couple of days after version 5.3 appeared. I tested both models and noticed some nuances that no one has mentioned before. How do they differ? Simplified: 5.3 responds quickly, with no significant delays, while 5.4 works a bit slower but is noticeably smarter and more perceptive.
Overall, 5.4 is now considered the flagship of the lineup (not counting the pro version, which is really super expensive and also top-tier). The new model is genuinely good: it shows excellent results in tests and behaves very pleasantly in practical use.
Marketers have added new "cliffhangers" to responses that encourage users to stay longer in the dialogue and actively test the model's capabilities. These techniques started appearing about two weeks ago, hinting at an upcoming release. I won't go into detail, but you may have noticed small "teasers" or questions at the end of some responses.
What should technically distinguish the new model is an extended context window of up to one million tokens, plus the ability to interrupt a response mid-generation and insert additional information. But there are nuances: after roughly 258,000 tokens the context starts to "drift," and hallucinations increase sharply. Also, tokens beyond roughly 272,000 are billed at double the rate (two tokens charged per one used). An enormous context sounds tempting, but in practice it's not so simple.
In the interface and via Codex, I personally couldn't push the context above 258,000 tokens; perhaps this is implemented differently on the paid version or on other plans. I was only able to enable the extended mode through the API. So don't expect miracles: for most users, the practical maximum is around 258K tokens.
Regarding cost: input runs about $2.50 and output about $15, presumably per million tokens.
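To make the tiered billing above concrete, here is a minimal cost-estimator sketch. The $2.50 / $15 rates and the ~272K-token threshold come from the notes above; treating the rates as per-million-token prices and modeling the surcharge as a flat 2x on tokens past the threshold are my assumptions, not confirmed billing rules.

```python
# Hypothetical cost estimator for the tiered token pricing described above.
# Assumptions (not from official docs): rates are per million tokens, and
# tokens beyond the threshold are simply billed at double the base rate.

INPUT_RATE = 2.50 / 1_000_000    # $ per input token (assumed per-million pricing)
OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token
SURCHARGE_THRESHOLD = 272_000    # tokens after which the 2x rate kicks in
SURCHARGE_FACTOR = 2             # "two tokens charged per one"

def billable_tokens(n: int) -> int:
    """Return the effective billable token count after the surcharge."""
    if n <= SURCHARGE_THRESHOLD:
        return n
    return SURCHARGE_THRESHOLD + (n - SURCHARGE_THRESHOLD) * SURCHARGE_FACTOR

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one request's cost in dollars under the assumptions above."""
    return (billable_tokens(input_tokens) * INPUT_RATE
            + billable_tokens(output_tokens) * OUTPUT_RATE)

# Example: a 300K-token input pays the base rate on the first 272K tokens
# and double on the remaining 28K, so it bills as 328K tokens.
print(round(estimate_cost(300_000, 4_000), 2))
```

The point of the sketch is how quickly the surcharge adds up: the last 28K tokens of that example cost as much as 56K tokens at the base rate.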
Another pleasant point — tasks in Codex work great with this model. It’s slightly more expensive than version 5.3 but shows better results.
Second update: a plugin for Excel has appeared! Great news for accountants and anyone who works actively with spreadsheets. It seems to be a partner project, and there are few reviews so far, but the idea is excellent.
If you specifically need Excel — this add-on will be quite useful. I’ve long switched to Google Sheets myself, so I don’t need it personally, but for those who rely on this tool — it will definitely be helpful.
Third innovation: the open orchestrator Symphony. It connects to your Kanban or task-management system and automates project workflows. The agent watches the task board and handles each card separately: it creates an isolated virtual environment and loads all the project code into it. Then it brings in an AI agent (such as Codex), which writes code for the task. Afterward, Symphony gathers proof that the work is done: it runs autotests, collects metrics, and even records a video showing how the changes look on screen.
If everything checks out, the system automatically opens a pull request and integrates the finished functionality into the project. Very convenient for automating development and testing, and you can try it out right now.
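The per-card pipeline described above can be sketched as a simple orchestration loop. Everything here is hypothetical: the `Card` structure, the function names, and the step order are my illustration of the described flow, not Symphony's actual API; each stub stands in for the real action (sandbox creation, agent call, test run, and so on).

```python
# Hypothetical sketch of the per-card workflow described in the article.
# None of these names come from Symphony itself; each step is a stub.
from dataclasses import dataclass, field

@dataclass
class Card:
    title: str
    log: list = field(default_factory=list)

def create_sandbox(card):        # isolated virtual environment per card
    card.log.append("sandbox")

def load_project_code(card):     # pull the project code into the sandbox
    card.log.append("code")

def run_coding_agent(card):      # an AI agent (e.g. Codex) writes the code
    card.log.append("agent")

def collect_proof(card):         # autotests, metrics, screen recording
    card.log.append("proof")

def open_pull_request(card):     # integrate the finished functionality
    card.log.append("pr")

def process_card(card: Card) -> Card:
    """Run the full pipeline for one board card, in order."""
    for step in (create_sandbox, load_project_code,
                 run_coding_agent, collect_proof, open_pull_request):
        step(card)
    return card

done = process_card(Card("Fix login bug"))
print(done.log)  # steps execute in the order the article describes
```

The design point worth noting is the isolation: because every card gets its own sandbox, failed or low-quality agent runs never touch the main branch until the proof-gathering step passes and a pull request is opened.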
Here is the link to GitHub for those interested in exploring further or testing it themselves.
Created with n8n:
https://cutt.ly/n8n
Created with syllaby:
https://cutt.ly/syllaby
