Where is the map of the moon? Damn, it exists ... astrogeology.usgs.gov/search…… Soo Moon City?

Davie, FL
Joined September 2019
Naga retweeted
EdgeTAM now supported by @thedroneforge. What should we do with this model and an autonomous drone?
EdgeTAM, real-time segment tracker by Meta is now in @huggingface transformers with Apache-2.0 license 🔥 > 22x faster than SAM2, processes 16 FPS on iPhone 15 Pro Max with no quantization > supports single/multiple/refined point prompting, bounding box prompts
Google was running millions of containers before Docker even existed. Before Docker, Linux cgroups were like a hidden superpower that almost nobody knew about. Google had been using cgroups extensively for years to manage their massive infrastructure, long before “containerization” became a buzzword. Cgroups - a Linux kernel feature from 2007 that could isolate processes and control resources. But almost nobody knew it existed. The problem? Cgroups were brutally complex and required deep Linux expertise to use. Then Docker arrived in 2013 and changed everything. Docker didn’t invent containers or cgroups. What it did was brilliant - it wrapped these existing Linux technologies in a simple, elegant interface that anyone could use. Suddenly, what took hours of cgroup configuration could be done with a single “docker run” command. Docker democratized container technology. It took Google-level infrastructure capabilities and put them in the hands of every developer. Sometimes the most impactful innovations aren’t about creating something new - they’re about making powerful existing technology accessible to everyone.
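The "hours of cgroup configuration vs a single docker run" point can be sketched concretely. A minimal, hypothetical illustration: the cgroup v2 file interface shown (`memory.max`, `cgroup.procs` under `/sys/fs/cgroup`) is the real kernel interface, but the helper functions and names here are illustrative, and running the raw steps on a real system would require root.

```python
# Hypothetical sketch: the raw cgroup v2 steps that a single
# `docker run` roughly replaces when setting a memory limit.

def raw_cgroup_v2_steps(name: str, mem_limit: str, pid: int) -> list:
    """Shell steps to memory-limit a process via the cgroup v2
    file interface (requires root on a real system)."""
    base = f"/sys/fs/cgroup/{name}"
    return [
        f"mkdir {base}",                          # create the cgroup
        f"echo {mem_limit} > {base}/memory.max",  # set the memory ceiling
        f"echo {pid} > {base}/cgroup.procs",      # move the process into it
    ]

def docker_equivalent(mem_limit: str, image: str) -> list:
    """The single Docker command wrapping the same kernel feature."""
    return [f"docker run -m {mem_limit} {image}"]

steps = raw_cgroup_v2_steps("demo", "500M", 1234)
print(len(steps), "manual steps vs", len(docker_equivalent("500m", "alpine")), "command")
```

And a realistic setup involves far more than three steps (CPU shares, namespaces, networking, images), which is exactly the surface area Docker collapsed into one CLI call.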
Naga retweeted
Hard learned advice from someone who loves open source: You can't wear open source on your sleeve when talking to investors. You tell them about revenue, growth, PMF, etc. Open source is a detail on page 67 of the term sheet.
K-scale cancels orders and refunds deposits for kbot. I thought all the VCs were excited about US-based robotics, what happened?
Naga retweeted
When people ask what Palantir does, I'm going to show this image from now on.
Apple is paying $1B to Google to use a white-labelled Gemini to power Siri, while Snap gets $400M from Perplexity to let them build AI search for Snap in a branded way. Goes to show how a large platform giving visibility / distribution to another company is worth $$$
I like this move by $SNAP. Partnering with Perplexity and giving them distribution in return for $400M a year. The wording also suggests there might be more of these kinds of deals coming. After this aftermarket move $SNAP is now one of my biggest positions.
Naga retweeted
When you email issues to Obsidian Entertainment (the video game company) their AI support hallucinates and tells you to email Obsidian (the note-taking company) instead. The perils of trusting an LLM with your customer support.
Laser cut 3M VHB 5952 foam adhesive is now available at SendCutSend. It’s my favorite way to stick stuff together. “3M VHB 5952 is a closed-cell acrylic foam adhesive that bonds metal, glass, painted surfaces, and plastics with durable, weather-resistant performance. Unlike mechanical fasteners or liquid adhesives, VHB creates a clean, uniform bond line that absorbs vibration and distributes stress evenly. It’s frequently used in automotive and industrial assemblies where a long-term, clean bond is needed without drilling or clamping.”
one of the coolest things i've learned @comma_ai: generally there's like 12 things causing your stuff to fail in the field, and you just have to go down the list. then your stuff is reliable. that's it. you literally just have to look at it. the comma two had a 26% failure rate, even with our fancy provisioning and stress test. the highest leverage single improvement we ever shipped? just adding a check to our fulfillment software to see if it passed the test you always think it's some super complicated bug. nope, the known bad device was just put in the "ready to ship" bin by mistake
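The fulfillment fix described above ("check that it passed the test before it ships") can be sketched in a few lines. This is a hypothetical illustration, not comma's actual software; the serial numbers, data structure, and function names are all invented.

```python
# Hypothetical sketch: gate the "ready to ship" bin on recorded test
# results instead of trusting which bin a device was placed in.

test_results = {"SN-001": "pass", "SN-002": "fail", "SN-003": "pass"}

def ok_to_ship(serial: str) -> bool:
    """A device ships only if it has a recorded passing stress test."""
    return test_results.get(serial) == "pass"

ready_bin = ["SN-001", "SN-002", "SN-003"]  # SN-002 was binned by mistake
shippable = [sn for sn in ready_bin if ok_to_ship(sn)]
print(shippable)  # the known-bad device is caught before it ships
```

The point of the tweet survives in the sketch: the high-leverage fix is a trivial lookup, not a complicated bug hunt.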
Manufacturing engineers are not respected enough, especially the ones from top tier companies such as Apple and SpaceX. These are the girls and guys making the world work. All of a product’s problems are 100X more prevalent in production than they are in the consumer world! Companies that value their manufacturing engineers thrive, companies that think they are just another engineer on the roster suffer. Time always tells. Great companies endure time. Bad companies rise and fall. Choose your team wisely.
Naga retweeted
Most of my frustration with the state of technology boils down to this neglected warning: “Man was not meant to be farmed” VCs are trying to uncap the upside potential of early stage power laws by manufacturing a synthetic founder. They have forgotten outliers are a scarcity that cannot be synthesized through direct means. By trying to create these types they dissuade those who otherwise would have pursued unthinkable innovations. They’re attempting to manage risk of ever growing fund sizes by commoditizing founders and commercializing the creative process. Ironically this warning comes from Marc Andreessen’s techno-optimist manifesto. Oh how they’ve lost the plot. If you don’t fit into the mold, if SF doesn’t seem like your crowd, if you want to be an entrepreneur but see a rat race, don’t be dissuaded. These are just the signals that you have the mandate and should aggressively pursue technology on your own lonely path, just like greats that came before.
Naga retweeted
I'm not sure much real value was created this cycle. We have LLMs and a few apps, but the rest is an endless void of monopoly money & narratives. Attention is not all you need, it's literally another layer of debt (appearance of value) that will crush us under the money printer.
It's over, folks. Pack it in.
An M.I.T. study found that 95% of companies that had invested in A.I. tools were seeing zero return. It jibes with the emerging idea that generative A.I., “in its current incarnation, simply isn’t all it’s been cracked up to be,” @JohnCassidy writes. nyer.cm/FUZwzw8
This is Microsoft SandDance, originally a closed-source project that was later open-sourced. It lets you visually explore and understand data with smooth, animated transitions between multiple views.
Naga retweeted
> freeze hiring > start firing > invest in AI infrastructure > deploy "AI software engineers" > wait 3 years > ... > ... > ... > AI introduced more technical debt than a fresh mathematics PhD > software becomes more and more buggy > slowly lose customers > hire consultants to fix it > consultants use AI to fix it > still not fixed > management blames consultants > try to hire cheap juniors to fix it > no more juniors left because training programs were stopped > hire super rare and expensive seniors instead > senior recommends resetting main branch to a commit 3 years ago > AI infrastructure investments depreciated 90% > all company numbers red > seppuku
A non-obvious second-order effect of the enormous AI capex spend may be that it's masking the deterioration of the US economy under the surface. While the aggregate economic stats may look just fine, the economy is becoming ever more K-shaped, with the lower half of the economy struggling ever more to keep up with cost-of-living increases, young graduates struggling to find jobs, and credit card and auto loan bills piling up. The seemingly non-negotiable enormous capex spending also has an underappreciated crowding-out effect on the rest of the real economy. First, the AI capex drives up input costs for non-AI manufacturing industries, from materials to labor. Indeed, manufacturing has been shrinking while data center capex surges. Secondly, if companies have to spend more on capex and expect more productivity from AI, they’ll hold back on hiring. It has already eaten into the hiring plans of even the highly profitable large-cap tech companies. To rephrase @pmarca, AI is eating the world, but maybe not quite in the way we expected.
Naga retweeted
Satya just told you the entire AI trade thesis is wrong and nobody is repricing anything. Microsoft has racks of H100s collecting dust because they literally cannot plug them in. Not "won't," cannot. The power infrastructure does not exist. Which means every analyst model that's been pricing these companies on chip purchases and GPU count is fundamentally broken. You're valuing the wrong constraint. The bottleneck already moved and the market is still trading like it's 2023. This rewrites the entire capex equation. When $MSFT buys $50B of Nvidia GPUs, the Street celebrates it as "AI investment" and bids up both stocks. But if half those chips sit unpowered for 18 months, the ROI timeline collapses. Every quarter a GPU sits in a dark rack is a quarter it's not generating revenue while simultaneously depreciating in performance relative to whatever Nvidia ships next. You're paying data center construction costs and chip depreciation with zero offset. The players who actually win this are whoever locked in power purchase agreements 3-4 years ago when nobody was thinking about hundreds of megawatts for inference clusters. The hyperscalers who moved early on utility partnerships or built their own generation capacity have structural leverage that cannot be replicated on any reasonable timeframe. You can order 100,000 GPUs and get delivery in 6 months. You cannot order 500 megawatts and get it online in 6 months. That takes years of permitting, construction, grid connection, and regulatory approval. Satya's point about not wanting to overbuy one GPU generation is the second critical insight everyone is missing. Nvidia's release cycle compressed from 2+ years to basically annual. Which means a GPU purchased today has maybe 12-18 months of performance leadership before it's outdated. If you can't deploy it immediately, you're buying an asset that's already depreciating against future products before it earns anything. 
The gap between purchase and deployment is now expensive in a way it wasn't when Moore's Law was slower. The refresh cycle compression also means whoever can deploy fastest captures disproportionate value. If you can energize new capacity in 6 months vs 24 months, you get 18 extra months of premium inference pricing before competitors catch up. Speed to deployment is now a direct multiplier on chip purchase ROI, which means the vertically integrated players with their own power and real estate can move faster than anyone relying on third party data centers or utility hookups. What makes this really interesting is it changes the competitive moat structure completely. The old moat was model quality and algorithm improvements. The new moat is physical infrastructure and energy access. You can train a better model in 6 months. You cannot build a powered data center in 6 months. This is the kind of constraint that persists for years and creates durable separation between winners and losers.
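The depreciation-while-dark argument in this thread is simple arithmetic, and can be made concrete with a toy calculation. The numbers below are assumptions for illustration (a roughly annual refresh implies a 12-18 month performance-leadership window; the thread cites 18 months of potential power delay), not figures from Microsoft or Nvidia.

```python
# Back-of-envelope sketch: with an annual-ish refresh cycle, every month
# a GPU waits for power eats directly into its leadership window.

def leadership_months_left(refresh_cycle_months: float,
                           deployment_delay_months: float) -> float:
    """Months of performance leadership remaining once the chip is live."""
    return max(0.0, refresh_cycle_months - deployment_delay_months)

# Assumed: ~15-month leadership window.
print(leadership_months_left(15.0, 6.0))   # fast energization keeps most of the window
print(leadership_months_left(15.0, 18.0))  # the chip is outdated before it's powered
```

Under these assumed numbers, a 6-month deployment keeps 9 months of leadership while an 18-month power delay leaves zero, which is the "speed to deployment is a direct multiplier on ROI" claim in one line.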
$MSFT CEO Satya just made one of the most revealing comments of the entire AI cycle when he said Microsoft has $NVDA GPUs sitting in racks that cannot be turned on because there is not enough energy to feed them. The real constraint is not compute but power & data center space. This is exactly why access to powered data centers has become the new leverage point. If compute is easy to buy but power is hard to get, the leverage moves to whoever controls energy & infrastructure. Every new data center that $MSFT, $GOOGL, $AMZN, $META & $ORCL are trying to build needs hundreds of megawatts of steady power. Getting that energy online now takes years which means the players who locked in power early & built vertically across the stack are the ones with real control. Hyperscaler growth is no longer defined by how many GPUs they can buy but by how quickly they can energize new capacity. Satya’s other point about not wanting to overbuy one generation of GPUs matters just as much. The refresh cycle is shortening as Nvidia releases faster chips every year which means the useful life of a GPU now depends on how quickly it can be deployed into production. When power & space are delayed then that GPU loses value before it ever produces a dollar of compute revenue. Satya just validated why my DCA plan remains overweight in the AI Utility theme. The AI economy will scale at the rate power comes online, not at the rate chips improve. The next phase of AI infrastructure growth will belong to whoever can energize capacity faster than demand expands. Power has become the pricing layer of intelligence: $IREN, $CIFR, $NBIS, $APLD, $WULF, $EOSE, $CRWV
What was the game plan here, Microsoft? Don’t train people… then act shocked when you choose to cut them? Because Nadella literally said it: • couldn’t scale headcount • built AI agents • then backfilled a tiny layer of humans on top The shock factor to me is this: "Microsoft CEO compared today’s AI-driven transition to earlier waves of technological change in the workplace. He recalled how, decades ago, companies shifted from sending forecasts and memos by fax to using email and Excel, a transformation that similarly redefined efficiency and collaboration across offices." We didn't lay off in the tens of thousands when we got email and Excel... did we? Call it a rebound if you want. The rest of us see the reset. Jobs are projects now. Get hired, already be looking for your next job. Companies planned it this way... workers should too.
Naga retweeted
Fei-Fei Li (@drfeifei) on limitations of LLMs. "There's no language out there in nature. You don't go out in nature and there's words written in the sky for you. There is a 3D world that follows laws of physics." Language is purely generated signal.
Columbia CS Prof explains why LLMs can’t generate new scientific ideas: because LLMs learn a structured “map”, a Bayesian manifold, of known data and work well within it, but fail outside it. True discovery means creating new maps, which LLMs cannot do.
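The "works inside the map, fails outside it" argument has a classic toy analogue: a model fit on known data can look fine inside its training range and be badly wrong outside it. The sketch below (our illustration, not the professor's) fits an ordinary least-squares line to y = x² on x in [0, 9] using only stdlib arithmetic, then queries inside and outside that range.

```python
# Toy interpolation-vs-extrapolation demo: a linear fit to quadratic data.
xs = list(range(10))
ys = [x * x for x in xs]

# Closed-form simple linear regression.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    return slope * x + intercept

inside_err = abs(predict(5) - 25)     # query within the training range
outside_err = abs(predict(20) - 400)  # query well outside the "map"
print(inside_err, outside_err)        # extrapolation error dwarfs in-range error
```

The line is a decent local map of the curve it saw, and a terrible guide to the territory it didn't, which is the shape of the claim about LLMs and genuinely new ideas.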
Naga retweeted
The greatest way to end arguments or debates is by providing proof, so here is my proof that it costs less than half & is faster to get this specific part cast & finish machined vs CNC. Keep in mind that for ANY quantity more than 1, the contest isn't even close in terms of cost, material usage & speed. Investment casting especially today is incredible for manufacturing & the USA needs more & more of it! 🇺🇸🇺🇸🇺🇸🇺🇸 Option 1: CNC by SendCutSend. Cost? $675.38, arriving Nov 8-10, 2025. Option 2: Investment casting by Connor Kapoor. Cost? $169.50, arriving Nov 6-8, 2025. We could even add in another $150 for doing the finish machining here & it still would be $319.50.
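The quantity claim can be checked with quick arithmetic on the per-part figures quoted above. Note this treats both prices as flat per-part costs and ignores any one-time tooling cost for the casting pattern, which the post doesn't break out.

```python
# Cost comparison using the quoted figures: CNC at $675.38 per part vs
# casting at $169.50 plus the stated $150 for finish machining.
CNC_PER_PART = 675.38
CAST_PER_PART = 169.50 + 150.00  # $319.50 all-in

def total_cost(qty: int, per_part: float) -> float:
    """Flat per-part pricing; no tooling amortization modeled."""
    return round(qty * per_part, 2)

for qty in (1, 5, 25):
    print(qty, total_cost(qty, CNC_PER_PART), total_cost(qty, CAST_PER_PART))
```

Under these numbers casting wins at every quantity, and the gap widens linearly with volume; in practice casting tooling is a fixed cost, so real-world casting economics improve with quantity even faster than this flat model shows.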
may god have mercy on whoever has to program this part at sendcutsend
The H-1B program was never wanted or needed. It was created in 1990 based on a faulty NSF study that was never made public and falsely predicted a shortage of engineers. Scientists who testified before Congress tore it apart, and even the NSF admitted it was flawed. Yet heavy business lobbying pushed it through anyway.
My partner works for a Fortune 100 company and they're pushing AI internally big time to try to reduce costs, but so far no one has gotten it to do more than low-level research and pumping out error-filled slide decks. Meanwhile the C-suite is constantly demanding their people use AI because they're hearing about how great it is on the news and seeing all this capex spend, so surely it must be useful! We are nowhere close to the tech being a viable replacement for most jobs. However, that isn't stopping management from shoving it down their people's throats and ginning up a ton of drama and internal strife as worker bees scramble to explain why this tech isn't a golden goose. This is happening all over the country right now.
I’m seeing a lot of companies saying they are laying people off because of AI. Oddly I’m not seeing any of them lowering prices because of AI.
Naga retweeted
AWS activates Project Rainier: one of the world’s largest AI compute clusters comes online. ~500,000 Trainium2 chips, and Anthropic is scaling Claude to >1,000,000 chips by Dec-25, a huge jump in training and inference capacity. AWS connected multiple US data centers into one UltraCluster so Anthropic can train larger Claude models and handle longer context and heavier workloads without slowing down. Each Trn2 UltraServer links 64 Trainium2 chips through NeuronLink inside the node, and EFA networking connects those nodes across buildings, cutting latency and keeping the cluster flexible for massive scaling. Trainium2 is optimized for matrix and tensor math with HBM3 memory, giving it extremely high bandwidth so huge batches and long sequences can be processed without waiting for data transfer. The UltraServers act as powerful single compute units inside racks, while the UltraCluster spreads training across tens of thousands of these servers, using parallel processing to handle giant models efficiently. AWS says Project Rainier is its largest training platform ever, delivering >5x the compute Anthropic used before, allowing faster model training and easier large-scale experiments. For energy use, AWS reports a 0.15 L/kWh water usage efficiency, matched with 100% renewable power and nuclear and battery investments to keep growing while staying within its 2040 net-zero goal. --- aboutamazon.com/news/aws/aws-project-rainier-ai-trainium-chips-compute-cluster
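The figures in the post imply the rough scale of the UltraCluster in server terms. A quick sanity-check calculation (using only the ~500K chip count and 64-chips-per-UltraServer numbers stated above; the exact server count is not given in the post):

```python
# Implied UltraServer counts from the reported figures.
chips_total = 500_000          # reported chips today
chips_planned = 1_000_000      # Anthropic's stated target by Dec-25
chips_per_ultraserver = 64     # Trainium2 chips per Trn2 UltraServer

ultraservers = chips_total // chips_per_ultraserver
planned_ultraservers = chips_planned // chips_per_ultraserver
print(ultraservers)          # 7812 -> roughly 7,800 UltraServers today
print(planned_ultraservers)  # 15625 -> roughly 15,600 at the 1M-chip target
```

That order of magnitude (thousands of 64-chip nodes stitched together with EFA) is what makes the cross-building networking and single-cluster scheduling the hard part of the build.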
About a year ago, this site near South Bend, Indiana was just cornfields. Today, it’s 1 of our U.S. data centers powering Project Rainier – one of the world’s largest AI compute clusters, built in collaboration with @AnthropicAI. It is 70% larger than any AI computing platform in #AWS history, with nearly 500K Trainium2 chips, and is now fully operational with Anthropic actively using it to train and run inference for its industry-leading AI model, Claude (providing 5X+ the compute they used to train their previous AI models). We expect Claude to be on more than 1 million Trainium2 chips by the end of year. Will help enable the next generation of AI innovation as we further extend our infrastructure leadership. aboutamazon.com/news/aws/aws…