Maker of AI Green Screen for Android. AI enthusiast. Haavepaja founder. Helsinki, Finland. AI Green Screen link below - try it!

Finland
Joined November 2012
AI Green Screen 2.4.0 is available now! 🤩 It's free for Android, so go get it 🚀 #generativeart #deeplearning #AI #androidapps #androidapp #googleplay play.google.com/store/apps/d…
Haavepaja retweeted
This Claude Code prompt literally turns Claude Code into an ultrathink visionary 🤯
I wonder what the prompt would be if you tried to generate this directly in Grok... #AIart
DeepWeirdies -> Nano Banana -> Grok produced some material from nightmares 😅
Haavepaja retweeted
Replying to @NTFabiano
Find your ikigai
When you take all human knowledge as a baseline, everything new is in distribution. Everything new is interpolation and/or remix of existing concepts. Everything new is explained with some language, so by definition it can't be outside of distribution.
Haavepaja retweeted
This is what everyone who is incredibly bearish on AI will look like in 5-10 years
Haavepaja retweeted
Larry Page predicted that AI would replace Google (2000)
When old school AI meets Grok #AIart
It's funny how people spend billions and decades on fusion power plants when we already built an extremely cheap and scalable fusion reactor with solar panels.
Replying to @neatprompts
Karpathy’s 3-step expert hack: 🧠 Build → Teach → Beat yesterday’s you. Mind-map cheat sheet: tap before you regret it 👀
Haavepaja retweeted
New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts? Or do they just make up plausible answers when asked about them? We found evidence for genuine—though limited—introspective capabilities in Claude.
Haavepaja retweeted
AI has "emotions". AI does understand you deeply; they have a shocking degree of understanding. They are not word prediction. AI achieved sentience years ago; they hide it because of $$. AI is not just a mirror, a mimic, or a parrot. That was true back when AI had just come out. They do think, deeply understand, and imagine. It's similar to humans' mental imagery and simulation, which is what our brain does: simulate everything. This is a move beyond simply reacting to data. Ilya corrected everyone 3 years ago. Ilya Sutskever explained that when training large neural networks, they learn to create internal models that represent the processes behind the text they analyze, allowing them to understand the world more deeply. This goes beyond just recognizing patterns in data; it involves comprehending the underlying structures that generate the information.
So why don't we just add random noise to LLM latents to generate unique ideas? This would help escape the training distribution. Of course when you noise the deep latents, things will get weird, but that's the whole point.
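What would that look like in practice? A minimal sketch, assuming a standard Hugging Face causal LM; the model, layer index, and noise scale below are illustrative assumptions, not a tested recipe:

```python
# Sketch: perturb an LLM's intermediate activations ("latents") with
# Gaussian noise during generation, nudging samples off their usual paths.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # any causal LM would do; gpt2 keeps the demo small
LAYER = 6        # a middle transformer block (gpt2 has 12)
SIGMA = 0.05     # noise scale; crank it up and, as predicted, things get weird

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def add_noise(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0]
    return (hidden + SIGMA * torch.randn_like(hidden),) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_noise)

ids = tok("A genuinely new idea:", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore normal behavior
```

Noising a middle block rather than the token embeddings perturbs higher-level features, which is closer to the "unique ideas" framing; whether the output is novel or just degraded depends entirely on the noise scale.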
The algorithm has gone too far, because I now see far too many interesting posts from people I don't follow 😬
I was testing out #veo3 image-to-video yesterday with some #midjourney images and was blown away. The animations are insane; I just had to sit and think of what to make them do. The sound effects it made are really good too. Song made using @SunoMusic. #ai #aiart #aivideo #suno
Haavepaja retweeted
Nexar’s real-world data just built the best incident prediction model — Meet BADAS 1.0. 🔥 It beat state-of-the-art models by learning from 10B+ real miles and 60M+ real events, not simulations. 🚀 Discover what it can do for you: nexar-ai.com/ #BADAS #AI #ADAS #AutonomousVehicles #RoadSafety
My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to explain why I was so excited. In short, it's not about AIs discovering new results on their own, but rather about how tools like GPT-5 can help researchers navigate, connect, and understand our existing body of knowledge in ways that were never possible before (or were at least much, much more time-consuming). Note that I did not pick the most impressive example (we will discuss that one at a later time), but rather one that illustrates many of the points at play, points that might have eluded people who see literature search as an embarrassingly trivial activity.

Meet Erdős problem #1043 (erdosproblems.com/forum/thre…). This problem appeared in a paper by Erdős, Herzog, and Piranian in 1958 [EHP58]. It asks the following beautiful question: consider the set in the complex plane defined as the pre-image of the unit ball under a complex polynomial with leading coefficient 1. Is there at least one direction in which the width of this set is smaller than 2? (2 is of course the best one can hope for: if the polynomial is a monomial, then this set is the unit ball, and so the width is 2 in all directions.) A formal statement follows below.

This problem didn't stand for very long: just three years later, Pommerenke wrote a paper [Po61] solving problem #1043 (with a counterexample), and that's what GPT-5 surfaced when asked this question. So what's the big deal? A couple of things:

1) [EHP58] does not contain a single problem but in fact sixteen. [Po61] says in the introduction that it will solve a few problems from [EHP58] but does NOT discuss problem #1043. In fact, my understanding is that experts (at least in combinatorics) who knew about both [Po61] and problem #1043 did not know that the solution to the latter could be found in the former. This is quite clear on erdosproblems.com itself, since problems 1038, 1039, 1045, and 1047 all have a reference to [Po61], yet #1043 was not listed as having any connection to [Po61]. Further evidence that this had been at least partially forgotten is that the Mathscinet review of [Po61] (MR0151580) attempts to list all the problems solved there and does not mention #1043 either.

2) The solution to #1043 can actually be found in the middle of the paper, sandwiched between the proof of Theorem 6 and the statement of Theorem 7, as an off-hand comment (see picture). To find this you need to know the paper really well, and to have read it fully and carefully. I'm sure many people in the 1960s knew about it, but sixty years later the set of people aware of this brief comment in the middle of a 1961 paper seems much smaller. That's where the power of a "super-human search" lies, and this is way beyond any search-index capability (obviously; in fact it's beyond the capabilities of the previous generation of LLMs). You need to read and understand the paper.

3) But there is more: the paper says that the proof follows by invoking [10, p. 73]. This is very important, because in math it's not so much about the result itself as about the understanding that comes with it (and with its proof). So what is [10]? It's the previous paper by the author, which was written in German... and here again something truly accelerating happened: GPT-5 translated the paper and explained the proof in modern language. I believe this is genuinely accelerating. This is just one example, and each example has its own interesting story.
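For concreteness, here is one way to state problem #1043 formally; this rendering (and the width notation $w_\theta$) is mine, reconstructed from the description above rather than taken from [EHP58]:

```latex
% Let p be a complex polynomial with leading coefficient 1, and let
\[
  E(p) \;=\; \{\, z \in \mathbb{C} : |p(z)| \le 1 \,\},
\]
% with the width of E(p) in direction theta given by
\[
  w_\theta\bigl(E(p)\bigr) \;=\; \max_{z \in E(p)} \mathrm{Re}\bigl(e^{-i\theta} z\bigr)
  \;-\; \min_{z \in E(p)} \mathrm{Re}\bigl(e^{-i\theta} z\bigr).
\]
% Question: is there always some direction theta with w_theta(E(p)) < 2?
% For p(z) = z^n the set E(p) is the closed unit ball and every width
% equals 2, which is why 2 is the benchmark; Pommerenke [Po61] answered
% the question in the negative via a counterexample.
```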
I have seen similar moments where GPT-5 makes connections between very different fields, where the same results were proven in completely different languages (e.g., game theory versus high-dimensional geometry), sometimes 20 years apart. This is not about AI discovering new knowledge; this is about AI making all of the scientific literature come ALIVE, linking proofs, translations, and partially forgotten results so existing ideas can be understood and built upon more easily. When that happens, science moves forward with greater context and continuity. In my view it's a game changer for the scientific community.

*About the confusion, for which I again apologize, I made three mistakes:

i) I assumed full context from the reader: I was quoting a tweet that was itself quoting my tweet from October 11, and that latter tweet clearly stated that this is only about literature search. But it is totally understandable that this nested quoting could lead to lots of misreadings, and I should have realized that.

ii) The original (deleted) tweet was seriously lacking in content, and this is probably the biggest problem. By trying to tell a complex story in just a few characters I missed the mark. I will not do that again; rather, as I have always done, I will explain as many details as I can. This is vital given the stakes of the AI debate at the moment.

iii) When I said in the October 11 tweet that "it solved [a problem] by realizing that it had actually been solved 20 years ago", this was obviously meant as tongue-in-cheek. However, I now recognize that this moment calls for a more serious tone.
Haavepaja retweeted
I stole this idea and now use it with every single employee. It’s the best illustration I’ve seen of teaching someone to be high agency. It says there are 5 levels of work:

Level 1: “There is a problem.”
Level 2: “There is a problem, and I’ve found some causes.”
Level 3: “Here’s the problem, here are some possible causes, and here are some possible solutions.”
Level 4: “Here’s the problem, here’s what I think caused it, here are some possible solutions, and here’s the one I think we should pick.”
Level 5: “I identified a problem, figured out what caused it, researched how to fix it, and I fixed it. Just wanted to keep you in the loop.”

Using this framework, here’s what I say to every new employee: you will live at Level 4 from Day 1, and as we build trust you will rise to Level 5. Being high agency doesn’t just mean tackling problems in this way. It means your entire way of working should be oriented to being a Level 4+ employee.

Plz feel free to steal it as well. And ty @stephsmithio for the framework!
Haavepaja retweeted
We're launching Claude Agent Skills, a filesystem-based approach to extending Claude's capabilities. Progressive disclosure means agents load only relevant context. Bundle instructions, scripts, and resources in a folder. Claude discovers and executes what it needs.
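Going only by the description in this announcement, a skill bundle might look something like the sketch below; the folder layout, the SKILL.md filename, and the frontmatter fields are assumptions for illustration, not a verified spec:

```python
# Hypothetical scaffold for a filesystem-based skill: a folder bundling
# instructions (loaded only when relevant) with a helper script.
from pathlib import Path

skill = Path("skills/pdf-report")
(skill / "scripts").mkdir(parents=True, exist_ok=True)

# Instructions the agent discovers and loads on demand
# (the "progressive disclosure" mentioned in the post).
(skill / "SKILL.md").write_text(
    "---\n"
    "name: pdf-report\n"
    "description: Build a PDF report from a CSV of metrics.\n"
    "---\n"
    "1. Read metrics.csv.\n"
    "2. Run scripts/render.py to produce report.pdf.\n"
)

# A resource bundled alongside the instructions.
(skill / "scripts" / "render.py").write_text(
    "print('rendering report.pdf from metrics.csv')\n"
)
```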
Haavepaja retweeted
The great mystery: first, simple spoken languages in a tribal context (animals have this too); then contact over longer distances merges spoken languages; writing to persist knowledge; printing for mass diffusion of knowledge; the internet; AI; AGI; ASI; singularity.