musician, programmer, and card-carrying autist, trying to answer questions about digital art (got any?)
currently reading: Gideon the Ninth by Tamsyn Muir

V 1.0
Joined February 2022
Carlyle and the utopian dream retweeted
added gifs and pngs to the game description on steam. check it out, wishlist, kiss me on the cheek
Carlyle and the utopian dream retweeted
👁️‍🗨️👁️‍🗨️
Carlyle and the utopian dream retweeted
One thing that complicates calling conventions is the idea of callee-save registers: the caller can pretend that some values stay resident in particular registers across function calls.

The main idea that motivated this (as far as I can guess) is: hey, if you are calling small functions, maybe they don't need to use all the registers, so you get an efficiency gain if those functions only use the registers that are considered destroyed. Nobody has to save-and-restore anything and everyone wins.

But in 2025, small functions always get inlined, so this doesn't make sense any more. Because non-inlined called functions are always big, they always want to use all the registers. So any time you call a function you should just consider all registers erased. (Unless you want to do a very specialized function-tagging scheme where functions give you a bitfield of which registers they use, and maybe you can save some of the SIMD registers across some set of calls or something.)

This would drastically simplify calling conventions, because nothing needs to be preserved in registers across a call. It also has consequences for register allocation algorithms and code motion: you want to group calls together to the greatest extent possible, and between calls you know nothing is retained, so there are clear windows in which you allocate that are smaller than what people try to do today.

That all sounds good, but the crux is: do you lose performance doing this, for some reason, even though called functions are all big in 2025? Anyone have notes here?
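For concreteness, a minimal sketch of the status quo being questioned, assuming the x86-64 System V ABI (where rbx, rbp, and r12-r15 are callee-saved); the function names are illustrative:

    /* Under x86-64 System V, rbx/rbp/r12-r15 are callee-saved: a callee
       that touches them must save and restore them. This lets the caller
       keep a value live in such a register across a call instead of
       spilling it to the stack. */
    long helper(long v); /* assume big, never inlined */

    long caller(long a, long b)
    {
        long t = a * 3;     /* compiler may park t in, say, rbx... */
        long u = helper(b); /* ...so it survives this call unspilled, at
                               the cost of helper() saving/restoring rbx
                               if helper() wants to use it */
        return t + u;
    }

Under the proposal above, the compiler would instead assume rbx is erased by the call and either spill t or recompute it, in exchange for helper() never paying save/restore costs.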
Carlyle and the utopian dream retweeted
This video shows some neat visualizations of lambda calculus operations. By the end, it should also convince you never to base anything practical on lambda calculus (looking at you, Martian Computing): piped.video/watch?v=RcVA8Nj6…
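For reference, a worked reduction (my example, not from the video) showing why "practical" and lambda calculus pull apart: even 2 + 3 on Church numerals is a pile of rewrite steps, because every number is a tower of function applications rebuilt by substitution.

    plus ≡ λm.λn.λf.λx. m f (n f x)

    plus 2 3
      →β  λf.λx. 2 f (3 f x)
      →β  λf.λx. f (f (3 f x))
      →β  λf.λx. f (f (f (f (f x))))   ≡ 5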
Carlyle and the utopian dream retweeted
Which of these Art Nouveau doors from the 1900s is your favorite?
Carlyle and the utopian dream retweeted
After 2 years, we're almost done with our next game: The Travelers is a text-based adventure game and MMO. Everyone plays in the same massive wasteland, working to uncover a hidden secret or to stop others from finding it first. Wishlist on Steam! store.steampowered.com/app/3…
Carlyle and the utopian dream retweeted
Observe:
Carlyle and the utopian dream retweeted
if your typeface doesn't have lowercase numbers I'm NOT USING IT
Carlyle and the utopian dream retweeted
I got a bit colorful in this reply, but let me describe why this drives me so crazy, especially now in 2025: someone comes along to make their web app, and they are using React or something. So now we have *at minimum* 4 layers of wrapping: React -> browser API -> SDL -> OS API. Each layer adds inefficiency and bugs and wastes a large amount of programmer time.

This is just on *one browser*. Every single other web browser has to run the same React app in the same way, with some *different* stack of 4+ wrappers all acting on each other. What is the goal of all these browsers doing all these heterogeneous things that suck up all these programmer life-years? Actually the goal is to ***provide no new functionality***: not only does a browser API not provide anything that was not in the pre-browser APIs, the browsers all have to be compatible with each other or web pages won't work. Meanwhile every other system that is not a browser, but that also wants to provide a consistent API to its users, has to do the same thing, but differently, because the provided APIs are different.

So if we are putting in this huge amount of effort to do all this redundant heterogeneous programming, solving the same problem dozens or hundreds of times, it must really be worth it, right? Like, we must be solving some super serious, difficult rocket-science problem? No. Keyboard and mouse input is very close to the simplest possible API. You have information about each event. That information is a small set of enums and flags. But everyone has trivially different numbers for their enums and flags, because everyone wants trivially different numbers for their enums and flags. (If you get really crazy, you might want to identify which device an event came from, but almost no programmers do this.) So we spend this immense amount of programming effort, slowness at runtime, bugs shipped to users, etc., in order to map trivially different numbers back and forth.

If we are going to screw around forever making mess after mess of such a simple thing, we don't deserve to be able to do complex and nuanced things. And so it has come to be: we almost can't do complex and nuanced things at all, any more. But even if we could, we wouldn't have the time. Multiply this by 1000 different systems, and you get today's software.

I feel programmers in 2025 are just very low-IQ compared to how programmers are "supposed" to be. It's like there's a very simple maze on a piece of paper in front of us but we just can't figure out the line to draw through the maze. So we get out a crayon and scribble (incidentally making the maze bigger), but we have a big dumb idiot smile for a few minutes because we contributed to open source!!!! (Followed by years of painful toothache as we deal with the things we actually typed.)

[BTW, jai adds to this problem -- we have our own set of trivially different numbers and flags. I hate it and I think it is stupid, but because I am spending my time and energy making a programming language, engine, and game, I don't have the bandwidth to solve the actual problem here, the solution to which seems to be political to at least a substantial degree.] x.com/Jonathan_Blow/status/1…
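A minimal illustration of the "trivially different numbers" point: one of the redundant translation tables every input wrapper ends up containing, here SDL2 keycodes vs. Win32 virtual-key codes (the specific constant pairs are real; the function itself is just a sketch):

    #include <SDL2/SDL.h>

    /* Every wrapper ships a table like this, differing only in which
       arbitrary numbers each API picked for the same physical keys. */
    int sdl_keycode_to_win32_vk(SDL_Keycode k)
    {
        switch (k) {
        case SDLK_a:      return 0x41; /* Win32 VK_A is 'A' (65); SDL uses 'a' (97) */
        case SDLK_ESCAPE: return 0x1B; /* these two happen to agree (27) */
        case SDLK_SPACE:  return 0x20; /* as do these (32) */
        /* ...hundreds more cases, for every key, for every pair of APIs... */
        default:          return 0;
        }
    }

The work is pure renaming: no behavior, just bijections between arbitrary constants, rewritten per platform, per wrapper, forever.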
Carlyle and the utopian dream retweeted
API design is my passion
Carlyle and the utopian dream retweeted
I once argued (and lost) for unary const:

    int x = 1;
    if (something()) x = 3 + foo();
    const x;
    x = 42;  // error, x was made const
if-else vs ? :

Replace:

    int x;
    if (condition) x = 7;
    else x = abc();

with:

    int x = condition ? 7 : abc();

Aside from being shorter, this is better because `x` is only assigned a value once. If it isn't assigned again, later on, it can also be made `const`:

    const int x = condition ? 7 : abc();

making your code easier to read and understand.
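Unary const isn't in C or C++ today, but you can approximate its effect by introducing a fresh const binding once the value has settled; a minimal sketch (the name x0 is just illustrative):

    int x0 = 1;              /* mutable while being computed */
    if (something()) x0 = 3 + foo();
    const int x = x0;        /* "freeze" it under the final name */
    /* x = 42;  -- error: assignment of read-only variable */

The cost is the extra name; unary const would let the same variable change mutability mid-scope.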
Carlyle and the utopian dream retweeted
oh so it's an impression because it's the most used engine? whew good to know, I thought it was because it's an engine designed around the assumption that you can call CreatePipelineState() mid-frame and have it return in < 16 or 33 ms, silly me!
What is interesting here is that a broken/unoptimized release damages the engine's reputation. Since the issue is widespread (it is very natural to develop that way), the most used engines will be perceived as the most problematic/slow/buggy/etc.
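A minimal sketch of the distinction being mocked, with a hypothetical create_pipeline() standing in for D3D12's CreatePipelineState or Vulkan's vkCreateGraphicsPipelines (pipeline creation invokes the driver's shader compiler and can take far longer than one frame):

    /* Hypothetical stand-ins for a real graphics API. */
    typedef struct Pipeline     Pipeline;
    typedef struct PipelineDesc PipelineDesc;
    Pipeline *create_pipeline(const PipelineDesc *desc); /* slow: driver compiles here */
    void      bind_and_draw(Pipeline *p);

    enum { MAX_MATERIALS = 256 };
    static Pipeline     *pipelines[MAX_MATERIALS];
    static PipelineDesc  material_descs[MAX_MATERIALS];
    static int           num_materials;

    /* Engine-friendly: pay the compile cost once, during loading. */
    void load_level(void)
    {
        for (int i = 0; i < num_materials; i++)
            pipelines[i] = create_pipeline(&material_descs[i]);
    }

    /* The pattern being criticized: first use of a material mid-frame
       triggers creation, blowing a 16/33 ms frame budget and hitching. */
    void draw_mesh(int material)
    {
        if (!pipelines[material])
            pipelines[material] = create_pipeline(&material_descs[material]);
        bind_and_draw(pipelines[material]);
    }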
Carlyle and the utopian dream retweeted
So, reminder: the quality of code output by these systems is *very low*, and the AIs themselves don't understand the output. This is obvious to anyone who knows how to program.

There are still use cases, for example, outputting a large volume of low-quality code that is not intended for serious purposes and is not expected to have a long lifetime.

Anyone extolling the virtues of AI code generation who actually knows how to program will do so with the context of the above paragraph in mind. Anyone running around saying "the AI just generates all this code and it's great!!" either: (a) does not understand code and should not be trusted with making decisions about code; or (b) if representing themselves as someone who understands code, is an obvious fraud, or else has so much Dunning-Kruger they don't know how bad they are.

This may improve in the future. I would love it if an AI could help me write complex programs more quickly. But that's just not the state of things today, and anyone claiming it is, is either lying or being fooled. x.com/t_blom/status/19774369…
Carlyle and the utopian dream retweeted
Recently, there was a clash between the popular @FFmpeg project, a low-level multimedia library found everywhere... and Google. A Google AI agent found a bug in FFmpeg.

FFmpeg is a far-ranging library, supporting niche multimedia formats, often through reverse engineering. It is entirely the result of volunteers and a marvellous piece of technology.

For people who have never been on the receiving end of 'security researchers', it is difficult to understand why there is pushback against them. Think about the commons. In Quebec, these are pieces of land where farmers send their cows during the summer. They are collectively owned, like FFmpeg. Everyone who uses the commons is responsible for caring for it. If you are not using it, you are supposed to stay away.

Now, imagine a rich corporation comes in and sends its well-paid agents into the commons to find issues with it. Maybe a broken barrier or a dangerous hole. So far so good... But instead of fixing the issues, the corporation says "you have a month to fix the issue or else I will report you to the government". How much love would the big corporation get in this context?

Why do the security researchers insist on disclosing the issue without having contributed to fixing it? So that they can get credit for it. That's their entire scheme: find issues, irrespective of whether they affect the use case of their employer (after all, any issue, no matter how small, can be potentially significant at some point), and then brag about it without doing the hard work of trying to fix it.

Let me be clear that not everyone working in security behaves this way. Many are good actors. But there are enough 'security researchers' behaving as parasites that it has become a recognizable pattern.

« But Daniel, who should be fixing the bugs then? » If you are paying for commercial support, get in touch with the folks you are paying. If you are not paying, then it is on you. It says so in the licenses. It is part of the moral code of open source. It is part of the legal framework.

Let me be clear: you do not get to bite back at Linus Torvalds if a bug in the Linux kernel crashes your server. What you do is identify the issue, narrow it down, and propose a fix. If you cannot do that, you pay someone to do it. Or you just do not use Linux.
Carlyle and the utopian dream retweeted
Linux desktop does still have some user-friendliness issues, as it turns out
Carlyle and the utopian dream retweeted
TIL the ampersand "&" is just sloppy handwriting of the Latin word "et", which means "and".
Carlyle and the utopian dream retweeted
The fall of "When Prophecy Fails": another social psychology classic turns out to be based on fabrications and lies.

In 1954, Dorothy Martin predicted an apocalyptic flood and promised her followers rescue by flying saucers. In "When Prophecy Fails" (1956), the now-canonical account of the event, Festinger, Riecken, and Schachter claimed that the group doubled down on its beliefs and began recruiting. This, the authors argued, was evidence of a new psychological mechanism: cognitive dissonance.

When Prophecy Fails is one of the most influential case studies in 20th-century social science. It shaped popular understandings of how belief survives disconfirmation and became a touchstone for explaining the origins of religious movements... But the case was misrepresented. The cult did not persist, proselytize, or reinterpret its failure as a spiritual triumph. Its leader recanted, the group disbanded, and belief dissolved.

Drawing on newly unsealed archival material, this article demonstrates that the book's central claims are false, and that the authors knew they were false. The documents reveal that the group actively proselytized well before the prophecy failed and quickly abandoned its beliefs afterward. They also expose serious ethical violations by the researchers.

The newly unsealed Box 4 of papers contains transcripts, telephone logs, research notes, channeled messages, and internal communications among the researchers. Collectively, they reveal serious ethical breaches: fabrications, covert manipulation, and at least one instance of interference with a child welfare investigation. One coauthor, Henry Riecken, posed as a spiritual authority and later admitted he had "precipitated" the climactic events of the study.

This article shows that the authors of When Prophecy Fails misled their readers, and that scholars in psychology, sociology, and religious studies have been building theories atop a collapsed foundation. The full scope and variety of the researchers' misrepresentations and misconduct could only emerge from Festinger's unsealed archives; the full story could not be written until now. Every major claim of the book is false, and the researchers' notes leave no option but to conclude the misrepresentations were intentional.
Carlyle and the utopian dream retweeted
I have mysterious eerie music playing in my house all the time. I'm terrified of my fridge