I can surprise you every day, including myself

Malta
Joined August 2018
Liam Debono retweeted
Nano banana 2 already in preview! Really excited for it since nano banana 1 was a huge and surprising success.
Nano Banana 2 is now in Preview 👀 currently on Media IO
Liam Debono retweeted
GPT-5.1 is going to be released on November 24 by @OpenAI
Liam Debono retweeted
Some more big leaks:
- GPT-5.1
- GPT-5.1 Pro
- GPT-5.1 Reasoning
The release can't be far off, probably to compete with the Gemini 3.0 Pro release. h/t @scaling01 for finding it first!
BIG OPENAI NEWS: GPT-5.1, GPT-5.1 Reasoning and GPT-5.1 Pro
GPT-5.1: "Flagship model for the latest generation of ChatGPT"
GPT-5.1 Reasoning: "Thinks longer for better answers"
GPT-5.1 Pro: "Research-grade intelligence"
Liam Debono retweeted
Hi everyone, Grand Theft Auto VI will now release on Thursday, November 19, 2026. We are sorry for adding additional time to what we realize has been a long wait, but these extra months will allow us to finish the game with the level of polish you have come to expect and deserve.
Liam Debono retweeted
You can now interrupt long-running queries and add new context without restarting or losing progress. This is especially useful for refining deep research or GPT-5 Pro queries as the model will adjust its response with your new requirements. Just hit update in the sidebar and type in any additional details or clarifications.
Liam Debono retweeted
Day 17 of learning system design

I learnt about denormalization.

In database design, we're always told to normalize, to break data into smaller tables so it's clean and consistent. But in real-world systems, too much normalization can slow things down. That's where denormalization comes in.

It means adding some redundancy back to your database to make reads faster. Instead of joining multiple tables every time, you keep certain data together, even if it's repeated. For example, instead of joining the orders and customers tables each time, you can just store the customer's name inside the orders table. Reads become faster, but updates get a bit harder.

Denormalization is all about trade-offs. You get faster reads but slower writes and a higher chance of inconsistency if you're not careful. It's common in large systems where speed matters more than strict normalization, like analytics or caching systems.

Normalize for data integrity. Denormalize for performance. System design is about knowing when to use which.

Hold me accountable ❤️
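Not from the original post, but a minimal sketch of that orders/customers example using Python's built-in sqlite3; the table and column names are invented for illustration.

```python
# Normalized vs. denormalized reads, assuming hypothetical customers/orders tables.
import sqlite3

db = sqlite3.connect(":memory:")

# Normalized: customer data lives only in `customers`; every read needs a JOIN.
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada');
    INSERT INTO orders VALUES (100, 1, 59.99);
""")
print(db.execute("""
    SELECT o.id, c.name, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall())

# Denormalized: copy the name into `orders` so reads skip the JOIN,
# at the cost of updating two places if the customer ever renames.
db.executescript("""
    ALTER TABLE orders ADD COLUMN customer_name TEXT;
    UPDATE orders SET customer_name = 'Ada' WHERE customer_id = 1;
""")
print(db.execute("SELECT id, customer_name, total FROM orders").fetchall())
```

The second query touches a single table, which is exactly the faster-reads/harder-writes trade-off the post describes.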
Day 16 of learning system design

I learnt about sharding and how it connects to database replication.

Sharding basically means splitting a large database into smaller, more manageable pieces called shards. Each shard holds a portion of the data, which helps distribute the load and improve performance, especially when your app scales.

Replication, on the other hand, means making copies of your database across multiple servers. It ensures data availability, fault tolerance, and quick recovery if one server goes down.

Here's how they connect: replication keeps copies of each shard across multiple servers, so even if one shard fails, the system still runs smoothly using the replicated copy.

Sharding handles scale, replication handles reliability; together, they keep large systems fast and resilient.

Still using the System Design Primer on GitHub for this journey, and it's been eye-opening.

Hold me accountable ❤️
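A toy illustration (not from the post) of how hash-based sharding and replication can combine; plain Python dicts stand in for database servers, and the shard count, keys, and simulated failure are all hypothetical.

```python
# Keys hash to a shard; each shard keeps a replica so a failed primary can be bypassed.
import hashlib

NUM_SHARDS = 3
shards = [{"primary": {}, "replica": {}, "primary_up": True} for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> dict:
    # Hash-based sharding: the key deterministically maps to exactly one shard.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return shards[digest % NUM_SHARDS]

def put(key: str, value: str) -> None:
    shard = shard_for(key)
    shard["primary"][key] = value
    shard["replica"][key] = value        # replication: the write is copied to the replica

def get(key: str):
    shard = shard_for(key)
    store = shard["primary"] if shard["primary_up"] else shard["replica"]
    return store.get(key)                # the replica serves reads if the primary is down

put("user:42", "Liam")
shard_for("user:42")["primary_up"] = False   # simulate the primary for this shard failing
print(get("user:42"))                        # still answered, from the replicated copy
```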
Liam Debono retweeted
Replying to @thegenioo
the website is liaobots.work/en
Liam Debono retweeted
You guys must be very jealous of me rn
Liam Debono retweeted
I would like to clarify a few things.

First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work.

What we do think might make sense is governments building (and owning) their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it, and it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government's benefit, not the benefit of private companies.

The one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government's call and where we would be happy to help (though we did not formally apply). The basic idea there has been ensuring that the sourcing of the chip supply chain is as American as possible in order to bring jobs and industrialization back to the US, and to enhance the strategic position of the US with an independent supply chain, for the benefit of all American companies. This is of course different from governments guaranteeing private-benefit datacenter buildouts.

There are at least 3 "questions behind the question" here that are understandably causing concern.

First, "How is OpenAI going to pay for all this infrastructure it is signing up for?"

We expect to end this year above $20 billion in annualized revenue run rate and grow to hundreds of billions by 2030. We are looking at commitments of about $1.4 trillion over the next 8 years. Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there; we are quite excited about our upcoming enterprise offering, for example, and there are categories like new consumer devices and robotics that we also expect to be very significant. But there are also new categories we have a hard time putting specifics on, like AI that can do scientific discovery, which we will touch on later.

We are also looking at ways to more directly sell compute capacity to other companies (and people); we are pretty sure the world is going to need a lot of "AI cloud", and we are excited to offer this. We may also raise more equity or debt capital in the future. But everything we currently see suggests that the world is going to need a great deal more computing power than what we are already planning for.

Second, "Is OpenAI trying to become too big to fail, and should the government pick winners and losers?"

Our answer on this is an unequivocal no. If we screw up and can't fix it, we should fail, and other companies will continue doing good work and serving customers. That's how capitalism works, and the ecosystem and economy would be fine. We plan to be a wildly successful company, but if we get it wrong, that's on us.

Our CFO talked about government financing yesterday, and then later clarified her point, underscoring that she could have phrased things more clearly. As mentioned above, we think that the US government should have a national strategy for its own AI infrastructure.
Tyler Cowen asked me a few weeks ago about the federal government becoming the insurer of last resort for AI, in the sense of risks (like nuclear power), not about overbuild. I said, "I do think the government ends up as the insurer of last resort, but I think I mean that in a different way than you mean that, and I don't expect them to actually be writing the policies in the way that maybe they do for nuclear."

Again, this was in a totally different context than datacenter buildout, and not about bailing out a company. What we were talking about is something going catastrophically wrong (say, a rogue actor using an AI to coordinate a large-scale cyberattack that disrupts critical infrastructure) and how intentional misuse of AI could cause harm at a scale that only the government could deal with. I do not think the government should be writing insurance policies for AI companies.

Third, "Why do you need to spend so much now, instead of growing more slowly?"

We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest in really scaling up our technology. Massive infrastructure projects take quite a while to build, so we have to start now.

Based on the trends we are seeing of how people are using AI and how much of it they would like to use, we believe the risk to OpenAI of not having enough computing power is more significant and more likely than the risk of having too much. Even today, we and others have to rate-limit our products and not offer new features and models because we face such a severe compute constraint.

In a world where AI can make important scientific breakthroughs but at the cost of tremendous amounts of computing power, we want to be ready to meet that moment. And we no longer think it's in the distant future. Our mission requires us to do what we can to not wait many more years to apply AI to hard problems, like contributing to curing deadly diseases, and to bring the benefits of AGI to people as soon as possible. Also, we want a world of abundant and cheap AI. We expect massive demand for this technology, and for it to improve people's lives in many ways.

It is a great privilege to get to be in the arena, and to have the conviction to take a run at building infrastructure at such scale for something so important. This is the bet we are making, and given our vantage point, we feel good about it. But we of course could be wrong, and the market, not the government, will deal with it if we are.
Liam Debono retweeted
Let’s go!! GPT-5.1 reasoning incoming!!
LEAK: GPT-5.1 Thinking officially confirmed by OpenAI
Replying to @elonmusk
Wild how every time Elon drops a “40x faster” promise, half the timeline screams “impossible” and the other half starts Googling what ‘SoftMax’ even means. Meanwhile the guy already built rockets that land themselves, cars that drive without drivers, and a social platform that lets y’all doubt him… in real time… for free. Maybe the “limiting factor” ain’t the chip. Maybe it’s the people still underestimating the dude who keeps breaking industries for fun.
Liam Debono retweeted
Elon Musk: At Tesla, we basically had two different chip programs: Dojo on the training side, and then what we call AI4, which is our inference chip.

AI4 is what's currently shipping in all vehicles, and we're finalizing the design of AI5, which will be an immense jump from AI4. By some metrics, the improvement in AI5 will be 40 times better than AI4. So not 40%, 40 times.

This is because we work so closely at a very fine-grained level on the AI software and the AI hardware, so we know exactly where the limiting factors are. Effectively, the AI hardware and software teams are co-designing the chip.

The worst limitation on AI4 is running the SoftMax operation: we currently have to run SoftMax in around 40 steps in emulation mode, whereas that'll be done in just a few steps natively in AI5.

AI5 will also be able to easily handle mixed-precision models, so you don't have to manage that yourself; it'll dynamically handle mixed precision. There's a bunch of technical stuff that AI5 will do a lot better.

In terms of nominal raw compute, it's eight times more compute, about nine times more memory, and roughly five times more memory bandwidth.

But because we're addressing some core limitations in AI4, you multiply that 8x compute improvement by another 5x improvement from optimization at a very fine-grained silicon level of things that are currently suboptimal in AI4, and that's where you get the 40x improvement.
Liam Debono retweeted
We're giving Pro and Max users free usage credits for Claude Code on the web. Since launching, your feedback has been invaluable for improving Claude Code. We’re temporarily adding free usage so you can more flexibly experiment with the product.
Liam Debono retweeted
Rumors are swirling that GPT-5.1 is real and currently being tested on LM Arena under various codenames. Whether it ends up being called 5.1, 5.5, or even 5o remains unclear, but something is definitely cooking at @OpenAI
Liam Debono retweeted
I’m starting system design today. Hold me accountable. On a journey to become a cracked backend developer.
Liam Debono retweeted
Day 1 of learning System Design.

I decided to watch a tutorial first to see how the concept actually works before shifting to articles and documentation, and I must say, clarity indeed comes from the mistakes we make. Everything seems clearer now, at least a bit.

To actually build a scalable, optimizable, and reliable application, you need to design the system to ensure it aligns with user needs. I knew something was missing in my backend skills.

And there's a saying: you can't be a good software engineer if you're not a good developer. No company hires an unskilled developer, so we might as well utilize our brains judiciously.
Liam Debono retweeted
Day 14 of learning system design

Today I learnt about the application layer and microservices, and read related articles.

The application layer is where user requests meet business logic; it's what connects what users do on the surface to what actually happens in the backend.

Then I went into microservices, which break that logic into smaller, independent services that communicate through APIs. It's how big systems stay scalable, fault-tolerant, and easier to maintain.

Understanding both matters because they form the foundation of how modern systems are structured and scaled.

Still learning with the System Design Primer on GitHub.

Hold me accountable ❤️
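As a rough sketch of the idea (not part of the original post), here is a hypothetical "orders" microservice built only on Python's standard library; the service name, port, and data are made up, and a real system would add persistence, auth, and service discovery.

```python
# A toy microservice: the application layer translates an HTTP request into business logic.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ORDERS = {"42": {"id": "42", "item": "keyboard", "status": "shipped"}}  # fake in-memory data

class OrderService(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /orders/42 -> look up the order and return JSON
        order_id = self.path.rstrip("/").split("/")[-1]
        order = ORDERS.get(order_id)
        body = json.dumps(order or {"error": "not found"}).encode()
        self.send_response(200 if order else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Other services (users, payments, ...) would run as separate processes
    # and call this one over its API instead of sharing its code or database.
    HTTPServer(("0.0.0.0", 8001), OrderService).serve_forever()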
Day 13 of learning system design

I spent today learning about proxies, forward and reverse, and honestly, it's wild how something this "simple" holds so much weight in how the internet actually works.

A proxy basically sits between the client and the server. But the real difference lies in what it protects.

A forward proxy hides the client. It's what you use when you don't want to expose your identity or when you're caching repeated requests. It acts on behalf of the client.

A reverse proxy hides the server. It decides which backend server should handle a request, helps with load balancing, and adds that extra layer of control and security.

The more I learn, the more I realize how much of system design is just about smart layering and control; nothing overly fancy, just well-thought-out architecture.

Still following the System Design Primer on GitHub. The resource is detailed and well structured if you're serious about understanding how systems actually scale.

Hold me accountable ❤️
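A minimal reverse-proxy sketch (again, not from the post), assuming two hypothetical backends on localhost ports 9001 and 9002; production systems would typically use nginx or HAProxy, but this shows the proxy choosing a backend on the client's behalf.

```python
# Round-robin reverse proxy for GET requests, using only the standard library.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical backend servers the proxy hides from clients.
BACKENDS = itertools.cycle(["http://localhost:9001", "http://localhost:9002"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                      # simple load balancing
        with urllib.request.urlopen(backend + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # client never learns which backend replied

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ReverseProxy).serve_forever()
```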
Liam Debono retweeted
As a Developer, do you feel rich?
Liam Debono retweeted
Building an app while skydiving? Say less.
Liam Debono retweeted
Best places to get a remote job:
1. LinkedIn
2. Google Jobs