Investing, US stocks, crypto, million-dollar wealth dreams, financial freedom

Joined May 2021
金沙枣 retweeted
$XYZ – Buybacks imo, we should be hitting $200M+ EVERY MONTH until $100. Stock is too cheap!! I don't understand why @AmritaAhuja slowed down in June & July 😤. Deploy the $1.1B remaining ASAP and let's announce another one.
金沙枣 retweeted
The average Robinhood user has $12K in their account. The average Charles Schwab client? $305K. $HOOD $SCHW
金沙枣 retweeted
🇫🇷 CPU Retail Sales, Amazon FR – October 2025. Total AMD domination in France as well.
ℹ️ Units: AMD 5,650 (~78.5%) | Intel 1,550 (~21.5%)
ℹ️ Revenue: AMD €1,456,789.00 (~84.9%) | Intel €617,211 (25%)
ℹ️ ASP: AMD €276 | Intel €343
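As a quick sanity check, the share math above can be recomputed from the tweet's own unit and euro figures. Note that the quoted revenue shares and ASPs don't quite reconcile with the tweet's own euro totals:

```python
# Recompute shares and ASPs from the unit counts and euro revenue
# quoted in the tweet above (figures taken verbatim from the tweet).
amd_units, intel_units = 5_650, 1_550
amd_rev, intel_rev = 1_456_789.00, 617_211.00

amd_unit_share = amd_units / (amd_units + intel_units)  # ~78.5%, matches the tweet
amd_rev_share = amd_rev / (amd_rev + intel_rev)         # ~70.2%, vs. the ~84.9% quoted
amd_asp = amd_rev / amd_units                           # ~258 EUR, vs. the 276 EUR quoted
intel_asp = intel_rev / intel_units                     # ~398 EUR, vs. the 343 EUR quoted

print(f"AMD unit share: {amd_unit_share:.1%}, revenue share: {amd_rev_share:.1%}")
print(f"ASP: AMD {amd_asp:.0f} EUR, Intel {intel_asp:.0f} EUR")
```

So the unit split checks out, but the revenue-share and ASP figures in the tweet are internally inconsistent with its euro totals.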
金沙枣 retweeted
Grok already groks memes better than most humans. Think about that …
One of the best things about Grok is it can explain memes. 😂
金沙枣 retweeted
The $INTC x $AMD Partnership Thesis 🤝

Why it makes sense: With $TSM strained by Jensen demanding more capacity during his current Taiwan tour, $AMD's rumored talks with Intel Foundry Services (IFS) could secure domestic-priority US production for its OpenAI deal, benefiting both competitors amid AI demand.

Why it can happen:
- TSM's Capacity Crunch: Customers like $NVDA keep begging for capacity because demand leaves them no choice. $TSLA is gaining priority on 3nm wafers as well, leaving AMD short, even as TSMC ramps to 160K wafers/month for Blackwell. IFS offers an alternative in US fabs: TSMC's limited capacity prioritizes clients like Nvidia and Tesla, risking AMD delays. Shifting to Intel reduces reliance, ensures timely AI chip supply, and hedges against Taiwan geopolitical risk.
- OpenAI Partnership Drive: Last month's deal has AMD supplying 6GW of Instinct GPUs, with OpenAI holding an option for a 10% stake and chip prioritization. IFS capacity lets AMD fulfill demand where TSM can't: OpenAI's massive compute needs require guaranteed volume, and dedicated IFS capacity honors prioritization commitments and scales production without competing for TSM slots.
- US Semi Advocacy: This past week, Sam Altman pushed hard for more fully American fabs via CHIPS Act expansion. Only Intel provides fully domestic capacity, aligning with OpenAI's goals as Altman advocates an American semiconductor supply chain free of foreign dependencies. This partnership advances that vision, secures supply chains, and leverages federal funding for faster AI innovation.
- Competitor Collaboration: Rumors last month pointed to early IFS talks for AMD chips. Like $AAPL x Samsung (for iPhone displays), this boosts IFS utilization and AMD's supply security: rivals can collaborate for mutual benefit. Intel fills underused fabs for revenue, AMD gains reliable output amid shortages, fostering industry growth in AI without sacrificing competition.

The rumors were not just rumors, the way I see it. Discussions are happening. The deal will happen, in some capacity. 🚀 Bookmark this.
金沙枣 retweeted
Meta $META spent 37% of its revenue on capex in Q3, its highest on record and above the 34% of revenue it spent at the height of its metaverse spending in 2022. $NVDA $AMD $AVGO
金沙枣 retweeted
BREAKING: $AMD EPYC x Mahindra, 40% savings 🚀

@AMD EPYC processors help Mahindra achieve 40% cost savings. Challenging the perception that high-performance computing (HPC) in the cloud is costly, Mahindra has modernized its HPC infrastructure by adopting AMD EPYC processor-powered virtual machines on Google Cloud. This move enhances compute efficiency across Mahindra's HPC workloads, enabling Mahindra to drive new revenue streams while boosting efficiency and reducing total cost of ownership (TCO).

Mahindra built an on-demand HPC platform to support compute-intensive workloads such as product design and virtual product simulation. During the digital-first launch of the Thar ROXX five-door SUV, Mahindra processed approximately 200,000 vehicle reservations online in just 1.5 hours, a feat not possible through traditional showroom-based rollouts. AMD EPYC processors deliver the performance, memory bandwidth, and core density needed to run these workloads at scale in the cloud, achieving significant infrastructure and software licensing cost savings.

Key highlights include:
· Cost Optimization: Achieved approximately 40 percent overall savings.
· Performance and Efficiency: Improved performance per core while reducing licensing costs.
· Scalable Platform: Infrastructure supports HPC, design, and enterprise workloads.
· Faster Innovation: Enabled faster design processing and more responsive digital experiences.

"By moving our workloads to AMD EPYC CPU-based virtual machines, we have seen greater application performance and up to 40 percent cost savings. The performance and efficiency of AMD EPYC processors have strengthened our business case and accelerated our IT transformation," said Abhishek Sukhwal, Head of Infrastructure, Mahindra Group.

"AMD EPYC processors are engineered to help enterprises like Mahindra accelerate innovation, whether on-prem or in the cloud," said Vinay Sinha, Managing Director – India Sales, AMD. "By delivering a strong combination of performance, core density, and energy efficiency, EPYC processors empower organizations to scale compute-intensive workloads while optimizing TCO."
$AMD: top Wall Street analyst sets $350 PT 🚀🚀🚀

C.J. Muse of Cantor Fitzgerald is the most bullish analyst on $AMD, maintaining an Overweight (Buy-equivalent) rating with a price target of $350. This target, reiterated on November 5, 2025, following AMD's Q3 earnings beat and raised guidance, implies about 40% upside from the stock's recent close around $250. Muse's commentary, drawn from his post-earnings research note, emphasizes $AMD's accelerating momentum in AI-driven data center products as a core growth engine, while acknowledging near-term headwinds in the client and embedded segments. He views the company's execution as "flawless" in a "seemingly insatiable" AI compute demand environment, positioning $AMD to capture significant share from Nvidia in GPUs and Intel in CPUs. Below is a detailed breakdown of his analysis, structured by key themes.

~ Muse credits the beat to "stronger-than-expected AI GPU adoption" and raised Q4 guidance (EPS $1.05-$1.15, revenue $7.3-$7.7 billion), implying full-year 2025 data center growth of 80%+.
~ Key quote: "AMD's data center momentum is sustainable and accelerating, with AI infrastructure demand outpacing supply into 2026."
~ Muse projects $6.5 billion in AI GPU revenue for 2025, a 150%+ increase YoY, fueled by the MI300 series and the upcoming MI350 (ramping mid-2025). He sees GPUs eclipsing CPU sales by H2 2026, with the total data center GPU opportunity at $25-50 billion annually (5-10% share of a $500 billion TAM).
~ He notes @AMD's 35% server CPU market share (vs. Intel's 65%) is expanding due to Zen 5 architecture advantages in power efficiency and core density, winning bids from AWS, Google Cloud, and Meta.
~ Muse praises CEO Lisa Su's "disciplined" approach, with a net cash position of $5 billion providing a "cushion" against valuation multiples (trading at 45x forward EPS vs. Nvidia's 60x).
~ At the $350 target, $AMD trades at 50x 2026 EPS, justified by a 25%+ CAGR in data center revenue. Muse sees "limited downside" to $220 if AI hype cools, but upside to $400+ on MI400-cycle wins.

Overall, Muse's thesis is unapologetically bullish: $AMD is "the No. 2 AI player with No. 1 potential," leveraging a $500B+ TAM expansion. His track record (75% hit rate on targets) and focus on execution over hype make this a standout call. Investors should watch Analyst Day for 2026 guidance, which could propel shares toward $300 by year-end.
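The headline numbers in the note are easy to verify. A minimal sketch using the $350 target, ~$250 close, and 50x multiple quoted above:

```python
# Check the upside and valuation math quoted in the analyst note above.
price_target = 350.0    # Muse's PT, per the note
last_close = 250.0      # "recent close around $250", per the note
forward_pe = 50         # "At $350 target, AMD trades at 50x 2026 EPS"

upside = price_target / last_close - 1        # 0.40, i.e. "about 40% upside"
implied_2026_eps = price_target / forward_pe  # $7.00 implied 2026 EPS

print(f"Implied upside: {upside:.0%}")
print(f"Implied 2026 EPS: ${implied_2026_eps:.2f}")
```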
金沙枣 retweeted
Strategy has acquired 487 BTC for ~$49.9 million at ~$102,557 per bitcoin and has achieved BTC Yield of 26.1% YTD 2025. As of 11/9/2025, we hodl 641,692 $BTC acquired for ~$47.54 billion at ~$74,079 per bitcoin. $MSTR $STRC $STRD $STRE $STRF $STRK strategy.com/press/strategy-…
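A quick sketch, using only the figures in the post above, confirms the quoted average purchase prices (small differences are rounding in the post's "~" figures):

```python
# Recompute Strategy's average BTC purchase prices from the post's figures.
new_btc, new_cost_usd = 487, 49.9e6            # latest purchase
total_btc, total_cost_usd = 641_692, 47.54e9   # cumulative holdings

new_avg = new_cost_usd / new_btc        # ~$102,464 (post says ~$102,557)
total_avg = total_cost_usd / total_btc  # ~$74,085 (post says ~$74,079)

print(f"New purchase avg:  ${new_avg:,.0f}")
print(f"All-time avg cost: ${total_avg:,.0f}")
```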
金沙枣 retweeted
$AMD is the second-best-performing position in the S&P 500 over the last 10 years. This was the most hated position on X in April at $82.
The best performing stocks in the S&P 500 over the last 5, 10, 15, and 20 years... piped.video/channel/UCRoWRnX…
@elonmusk's remarks at the latest Tesla shareholder meeting validate my previous analysis and predictions regarding his semiconductor strategy. This brief analysis focuses on why Musk wants Tesla to build its own chip production plants.

1. Musk said, "So I'm hopeful that we can within less than a year of AI5 starting production, we can actually transition in the same fab to AI6 and double all of the performance metrics." I previously projected that AI6 would enter mass production in 2027. Many questioned whether only a one-year gap between AI5 and AI6 was realistic, but Musk's comments validate that timeline (setting aside whether the plan ultimately executes as stated).

2. Musk wants Tesla to build its own chip production plants. This also validates my earlier view that a key reason for shifting AI6 orders to Samsung was to gain real-world foundry experience at an exceptionally low cost. It's clear Musk expects that experience to help Tesla stand up chip production plants of its own.

Musk is worried about future chip supply, hence his comment: "But even when we extrapolate the best case scenario for chip production from our suppliers, it's still not enough." However, I don't think chip supply is the core issue, at least not at TSMC. TSMC is unlikely to be the primary bottleneck. TSMC CEO C.C. Wei has told Musk before, "If you're willing to pay, there will be chips." While TSMC is known for its cautious approach to capacity expansion and often runs tight when demand is strong, history shows it has rarely been the primary bottleneck in the supply chain. When more information emerges, I'll provide a tighter quantitative read on TeraFab's supply-demand balance.

For now, I'll leave aside whether Tesla can successfully build its own chip production plant and focus on why Musk still wants one even if TSMC's supply isn't the constraint.

The first factor is geopolitics. Musk has publicly voiced concerns about the concentration of advanced-node capacity in Taiwan. Even by 2030, TSMC's advanced-node and advanced-packaging capacity in the U.S. will remain limited, especially on the packaging side, likely no more than about 10% of its global capacity. I believe Musk realizes this as well, which is why he said, "And then you go to solve memory and packaging too."

The second factor is R&D support and production flexibility. Tesla is expected to have significant growth potential, but within TSMC today it is still a second-tier customer relative to the likes of Apple and Nvidia. That naturally means less priority on R&D support and production flexibility, one of the main reasons Tesla moved AI6 to Samsung. The ultimate remedy is to control its own chip production plant.

Finally, there's the integration advantage. Musk's new technologies and products are often at the cutting edge. The ability to customize key design and manufacturing segments to his specific needs, with chip production being a critical one, enables a highly integrated final product and maximizes the benefits of vertical integration.

References:
1. Tesla's 2025 Annual Shareholder Meeting Transcript singjupost.com/teslas-2025-a…
2. TSMC CEO C.C. Wei told Elon Musk, "If you're willing to pay, there will be chips." money.udn.com/money/story/11…
3. Musk has publicly voiced concerns about the concentration of advanced node capacity in Taiwan. eetimes.com/musk-says-chip-c…
For Elon Musk and Tesla, this represents a valuable opportunity to gain real-world foundry experience at an exceptionally low cost, something TSMC would never allow. It enables Tesla to enhance its chip design capabilities, particularly in manufacturability, while also gaining deeper manufacturing knowledge, which will give it more leverage in future negotiations with foundries. In the long run, Elon Musk's businesses will only demand more advanced chips, so acquiring core manufacturing expertise becomes a strategic advantage.

Tesla's AI6 chip is currently scheduled for mass production in 2027, using Samsung's 2nm node (SF2). SF2 currently has a yield of 40-45%, lower than TSMC's N2 (over 70%) and Intel's 18A (50-55%). Elon Musk's execution is proven, and SF2's adoption of the same GAA technology as SF3 should facilitate mass production. Even so, it's still difficult to predict whether Samsung can successfully ramp AI6 on SF2 as scheduled. If production falls short of expectations, the worst-case scenario for Tesla would be to shift the order back to TSMC and absorb the resulting delays to AI6. However, Tesla's edge in real-world AI could significantly reduce the risk of AI6 delays. Regardless, Tesla still gains from enhanced design capabilities and deeper chip manufacturing know-how.

As for Samsung, with nothing to lose, why not give it a try? This partnership presents manageable downside and strong upside potential for both sides. If AI6 reaches mass production smoothly, chip design and manufacturing could become a core competitive advantage across Elon Musk's businesses, enabling greater flexibility and lower costs. While Samsung may not fully catch up with TSMC in advanced nodes, it has at least discovered a new business model that actively involves customers in the manufacturing process.
金沙枣 retweeted
CRASH COMING: Why I am buying, not selling. My target price for Gold is $27K. I got this price from my friend Jim Rickards… and I own two gold mines. I began buying gold in 1971, the year Nixon took gold from the US Dollar. Nixon violated Gresham's Law, which states: "When fake money enters the system… real money goes into hiding." My target price for Bitcoin is $250K in 2026. Silver $100 in 2026. I own silver mines and I know new silver is scarce. Ethereum $60. I got this from Tom Lee. Ethereum is the blockchain for stablecoins. This means Ethereum follows Metcalfe's Law… the law of NETWORKS. LESSON: I follow the laws of money, Gresham's and Metcalfe's laws. Unfortunately the US Treasury and Fed break the laws. They print fake money to pay their bills. If you and I did what the Fed and Treasury are doing… we would be in jail for breaking the laws. Today the USA is the biggest debtor nation in history, which is why I have been warning "Savers are losers." That is why I keep buying gold, silver, Bitcoin, and Ethereum even when they crash. Take care. Massive riches ahead.
金沙枣 retweeted
$AMD's Data Center business brought in $4.3 billion of revenue this past quarter, up from $610M in Q1 2021.
金沙枣 retweeted
Join us on Nov. 11 at 1 p.m. ET for AMD Financial Analyst Day, live from NYC. Hear how we’re driving the future of high-performance and adaptive computing across AI, data center, client, gaming, and embedded markets. Register to watch the livestream: event.webcasts.com/starthere…
金沙枣 retweeted
It sure is a tall order 😂 Anyone can buy Tesla stock right now and come along for the ride. There will inevitably be some bumps along the way, but, with a truly immense amount of work, I think these goals can be accomplished.
The final boss in Musk's 2025 Comp Plan is achieving $400B in EBITDA. Perspective: the highest annual EBITDA on record is $238.24 billion, achieved by Saudi Arabian Oil Company (Saudi Aramco) in 2022, driven by elevated global oil prices, higher sales volumes, and strong refining margins amid geopolitical tensions and post-pandemic demand recovery. This figure surpasses other major companies like ExxonMobil ($102.59 billion in 2022) and NVIDIA ($34.48 billion in fiscal 2024). For context, Aramco's 2023 EBITDA was approximately $236.81 billion, still leading but lower than 2022's peak. Let that sink in.
金沙枣 retweeted
TSMC $TSM is rumored to be considering four consecutive years of price hikes for its sub-5nm nodes, with average increases of 3% to 5% per year. $NVDA $AMD $AVGO $AAPL
金沙枣 retweeted
$AMD Bears aren't ready for triple-digit growth in 2026 🔥
@AMD FY 2026:
Base case: $70B, assuming 15% AI market share
Bear case: $60B+, assuming 10% AI market share
Bull case: $70B-$100B, assuming 25% AI market share
Read the full thread below ⤵️ Not Financial Advice!
$AMD MI450 vs $NVDA Rubin: comprehensive 🧵

The @AMD Instinct MI450 (part of the MI400 series) and @nvidia's Rubin architecture (expected flagship like the R200 or VR200) represent the next wave of AI accelerators, both slated for production and deployment in 2026. These chips target hyperscale AI training and inference, with AMD emphasizing memory capacity and rack-scale integration to challenge NVIDIA's ecosystem dominance. Both use HBM4 memory and advanced packaging, but MI450 leverages a superior process node for density. Performance metrics are peak theoretical (FP4 for AI inference/training); real-world results vary by workload.

Architecture: AMD CDNA 5 (UDNA-based) | NVDA Rubin (successor to Blackwell)
Process node: AMD TSMC 2nm | NVDA TSMC 3nm
Memory: AMD 432 GB HBM4 | NVDA 288 GB HBM4
Memory bandwidth: AMD 9.6 TB/s | NVDA 20 TB/s (enhanced post-MI450 reveal)
Max compute (FP4 / FP8): AMD ~40 PFLOPS / ~20 PFLOPS | NVDA ~50 PFLOPS / ~25 PFLOPS
Power consumption: AMD 1,000-1,400W | NVDA 1,800-2,300W
Rack-scale solution: AMD Helios (72 GPUs; 31 TB HBM, 1.4 PB/s total) | NVDA NVL144 (144 GPUs; liquid-cooled, Vera CPU)
Release date: both in H2 2026
Estimated price: AMD MI450 $30K-$40K (large-scale discounts) | NVDA Rubin $45K-$60K (minimal discounts seen so far)

Total Cost of Ownership (TCO) for AI accelerators like the MI450 and Rubin encompasses not just the upfront hardware price but also ongoing expenses such as energy bills, cooling infrastructure, maintenance, and scalability over 3-5 years in data centers. AMD's MI450 delivers a significantly lower TCO, estimated at 20-40% less than Rubin's in inference-heavy workloads, primarily due to a combination of pricing aggression, superior energy efficiency, and optimized rack-scale designs that minimize infrastructure upgrades.

~ Lower acquisition costs: AMD GPUs are typically priced 25-35% below NVIDIA equivalents, with MI450 units projected at $30K-$40K versus Rubin's $45K-$60K. This stems from AMD's fabless model leveraging TSMC without NVIDIA's custom-ecosystem premiums. Partnerships like the 6GW OpenAI deal and Oracle's 50K-unit order further drive volume discounts, reducing per-unit costs for hyperscalers.

~ Energy and cooling savings: Power is the biggest TCO driver, accounting for 40-60% of lifetime costs in AI clusters. MI450's estimated 1,200W TGP (versus Rubin's 2,300W) translates to ~48% lower draw per GPU, cutting annual electricity bills by up to $500K per 1,000-unit rack at $0.10/kWh. Cooling follows suit: AMD's chiplet-based Helios racks (72 GPUs) require less dense liquid-cooling setups than NVIDIA's NVL144 (144 GPUs), avoiding costly retrofits for existing data centers. TSMC benchmarks show 2nm nodes yielding 20-30% better perf/watt, amplifying these savings in memory-bound tasks like LLM inference.

~ Higher density and scalability: MI450's Infinity Fabric enables denser racks (up to 128 GPUs in IF128 configs) with 1.4 PB/s aggregate bandwidth, delivering 6.4 EFLOPS FP4, nearly double Rubin's 3.6 EFLOPS in equivalent space. This means fewer racks for the same throughput, slashing deployment costs by 15-25%. AMD's UALink compatibility also future-proofs against vendor lock-in, reducing long-term refresh expenses.

How does MI450 consume nearly half the power? The MI450's ~1,200W thermal design power (TGP) is indeed about half of Rubin's 2,300W, a deliberate design choice rooted in AMD's advanced process node, chiplet architecture, and workload optimization. This isn't just smaller transistors; it's a holistic efficiency play that avoids NVIDIA's power escalations to chase raw FLOPS.

~ Superior process node: MI450's core compute dies use TSMC's 2nm (N2P) node, versus Rubin's full-chip 3nm (N3P). The 2nm shrink delivers ~1.15x transistor density and 20-30% better power efficiency per TSMC data, allowing AMD to hit 40 PFLOPS FP4 at lower voltages and clocks. NVIDIA's redesigns, bumping TGP by 500W to counter MI450, pushed Rubin to 2,300W for marginal gains in bandwidth (20 TB/s vs. MI450's 19.6 TB/s).

~ Chiplet modularity: AMD's multi-die approach (separate accelerator core, interposer, and media dies) isolates power-hungry elements, enabling fine-grained scaling. Only the core needs 2nm; the others use cost-effective 3nm, reducing overall leakage and dynamic power by 15-25% compared to NVIDIA's monolithic (or early-chiplet) Rubin. This modularity also cuts thermal hotspots, allowing sustained boosts without thermal throttling.

~ Architecture and workload focus: CDNA 5 prioritizes dense matrix ops for AI (FP4/FP8) with fewer overhead cycles than Rubin's tensor-heavy design, which inflates power for peak training bursts. At rack scale, Helios's Infinity Fabric (IF64/128) offloads interconnect power to fabric links, versus NVLink 6's GPU-centric draw. AMD's ROCm optimizations yield 4x efficiency gains over MI300X in inference, where power scales linearly with model size; MI450 handles 432 GB HBM4 without NVIDIA's sharding penalties that spike energy use.

Conclusion: As the 2026 AI landscape crystallizes, AMD's Instinct MI450 emerges not as a mere challenger but as a transformative force, poised to erode NVIDIA's 95% market stranglehold through unmatched TCO efficiency and power frugality. By harnessing TSMC's 2nm edge and chiplet ingenuity, MI450 delivers Rubin-caliber performance (40 PFLOPS FP4, 432 GB HBM4) at half the power (1,200W vs. 2,300W) and 20-40% lower lifetime costs, making it the go-to for inference-dominated hyperscalers like $META (42% allocation), @OpenAI (6GW commitment), and Oracle (50K units). NVIDIA's Rubin retains software supremacy and training prowess via CUDA's moat, but AMD's Helios racks, boasting 1.4 PB/s bandwidth and 6.4 EFLOPS density, signal a "Milan moment" per AMD management, potentially flipping 20-30% share by 2026 amid HBM4 crunches.
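The thread's energy-cost claim can be sanity-checked with simple arithmetic. The sketch below uses the thread's 1,200W / 2,300W TGP figures and $0.10/kWh rate; the 1,000-GPU fleet size and utilization levels are illustrative assumptions, not from the thread. At 100% utilization the gap is closer to ~$960K/yr, so the thread's "up to $500K" figure implies roughly 50% average utilization:

```python
# Annual electricity-cost gap between the MI450 (~1,200W) and Rubin
# (~2,300W) TGP figures quoted in the thread, at $0.10/kWh.
# Fleet size (1,000 GPUs) and utilization are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_energy_cost(watts: float, gpus: int = 1000,
                       usd_per_kwh: float = 0.10,
                       utilization: float = 1.0) -> float:
    """Electricity cost in USD/year for a GPU fleet at a given duty cycle."""
    kwh = watts / 1000 * HOURS_PER_YEAR * gpus * utilization
    return kwh * usd_per_kwh

mi450 = annual_energy_cost(1200)  # ~$1,051,200/yr at full load
rubin = annual_energy_cost(2300)  # ~$2,014,800/yr at full load
print(f"Full-load gap: ${rubin - mi450:,.0f}/yr")
print(f"Gap at 50% utilization: ${(rubin - mi450) * 0.5:,.0f}/yr")
```

This only covers electricity; the thread's full 20-40% TCO claim would also depend on cooling, acquisition price, and rack density, which this sketch deliberately leaves out.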
金沙枣 retweeted
When a man controls his lust, he becomes unstoppable ‼️‼️
金沙枣 retweeted
The mayor of Vancouver, Canada just bought a bottle of Coca-Cola from a vending machine using #Bitcoin. He wants the city government to buy Bitcoin as a reserve. Very forward-thinking.
金沙枣 retweeted
🇺🇸 Michael Saylor says Bitcoin’s quantum threat is still 10–20 years out “By then, Bitcoin will upgrade like everything else” “We’ll just upgrade the software, Joe”
“When stocks are attractive, you buy them. Sure, they can go lower—I’ve bought stocks at $12 that went to $2, but then they went to $30.” — Peter Lynch