👨🏻‍💻 Marketing Technologist specialising in WordPress and Web3.

New Zealand
Joined September 2008
After creating my first #Ethereum token contract, I feel like this could be the catalyst for a Cambrian Explosion in #CryptoCurrency.
Ah, an email from my close personal friend… or a short reminder of why you should always test your marketing emails.
Measure what matters
Ðaniel ₩onder retweeted
You can't self-custody a stablecoin, btw. The central issuer of the stable can lock you out even while you hold it in your wallet. This is, obviously, not possible with cash. Not to FUD stablecoins, which obviously have their use, but this must be acknowledged.
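The freeze mechanism the tweet describes can be modelled in a few lines. A minimal Python sketch (class and names are hypothetical, mirroring the blocklist pattern common in issuer-controlled token contracts): balances sit in the holder's wallet, but every transfer consults a list only the issuer controls.

```python
class CentralizedStablecoin:
    """Toy model of an issuer-controlled token (hypothetical, for illustration).

    Balances live with holders, but the issuer keeps a freeze list
    that the transfer function checks on every call.
    """

    def __init__(self, issuer):
        self.issuer = issuer
        self.balances = {}
        self.frozen = set()

    def mint(self, to, amount):
        self.balances[to] = self.balances.get(to, 0) + amount

    def freeze(self, caller, address):
        # Only the issuer may add addresses to the freeze list.
        if caller != self.issuer:
            raise PermissionError("only the issuer can freeze")
        self.frozen.add(address)

    def transfer(self, sender, to, amount):
        # The issuer's freeze list overrides the holder's "custody".
        if sender in self.frozen or to in self.frozen:
            raise PermissionError("address frozen by issuer")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount


usdx = CentralizedStablecoin(issuer="issuer")
usdx.mint("alice", 100)
usdx.freeze("issuer", "alice")
# Alice still "holds" 100 tokens in her wallet...
assert usdx.balances["alice"] == 100
# ...but can no longer move them.
try:
    usdx.transfer("alice", "bob", 10)
except PermissionError as e:
    print(e)  # address frozen by issuer
```

The point of the sketch: freezing changes nothing about who stores the balance, only who is allowed to move it, which is exactly why holding the token in your own wallet isn't full self-custody.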
Hot Take: Unlimited data that switches to dial-up speed when you reach a cap is not actually unlimited.
What happens if ChatGPT uses Wikipedia as a source but Wikipedia content is created by ChatGPT? Do they hit an infinite loop of AI hallucinations?
Ðaniel ₩onder retweeted
.@ensdomains doesn’t get nearly enough credit as a better identity system than anything else that exists. From just vitalik.eth you can send payments, visit his site, see his avatar. He can cryptographically prove all of it, and no one can take it away or censor it.
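Everything hanging off vitalik.eth is keyed by a single 32-byte node derived from the name via the EIP-137 namehash recursion. The sketch below shows only that recursive structure: real ENS uses Keccak-256, which isn't in Python's stdlib, so this substitutes SHA3-256 and its outputs will NOT match on-chain node hashes.

```python
import hashlib


def sketch_namehash(name: str) -> bytes:
    """Illustrates the EIP-137 namehash recursion.

    CAVEAT: uses SHA3-256 instead of ENS's Keccak-256, so the
    resulting digests differ from real ENS nodes; only the
    label-by-label folding from right to left is faithful.
    """
    h = lambda b: hashlib.sha3_256(b).digest()
    node = b"\x00" * 32  # the root node is 32 zero bytes
    if name:
        # Fold labels from the TLD inward: "eth", then "vitalik".
        for label in reversed(name.split(".")):
            node = h(node + h(label.encode()))
    return node


node = sketch_namehash("vitalik.eth")
print(len(node))  # 32
```

Because the node is a pure function of the name, any resolver record (address, avatar, content hash) can be looked up from the name alone, which is what makes one string usable as payment target, website, and profile at once.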
DJ was just scratching the decks and I realised how long it’s been since I heard that in the digital age.
Fully Decentralised Remote First Company Hiring: Must be located in New York 😵‍💫
The number nerd in me wishes I tested ChatGPT Atlas yesterday when I first downloaded it...
I've been meaning to check out @echodotxyz from @cobie for a while. With the recent @coinbase acquisition and the @megaeth_labs launch approaching, now is probably as good a time as any. gMega ∑:
Dear AI, let's take this slow and test every step. AI: Great plan! Let me just create 1000 files, and then get stuck in a loop trying to find a bug inside one of them. Which one? Not sure.
Ðaniel ₩onder retweeted
Excited to release new repo: nanochat! (It's among the most unhinged I've written.) Unlike my earlier similar repo nanoGPT, which only covered pretraining, nanochat is a minimal, from-scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script, and in as little as 4 hours you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:
- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple choice questions, tool use
- SFT, evaluate the chat model on world knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), code (HumanEval)
- RL the model optionally on GSM8K with "GRPO"
- Run efficient inference on the model in an Engine with KV cache, simple prefill/decode, and tool use (Python interpreter in a lightweight sandbox); talk to it over CLI or a ChatGPT-like WebUI
- Write a single markdown report card, summarizing and gamifying the whole thing

Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems and answer simple questions. About ~12 hours surpasses the GPT-2 CORE metric. As you scale up further towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple choice tests. E.g. a depth-30 model trained for 24 hours (about equal to the FLOPs of GPT-3 Small 125M, and 1/1000th of GPT-3) gets into the 40s on MMLU, 70s on ARC-Easy, 20s on GSM8K, etc.

My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed).
I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned, or optimized (actually I think there's likely quite a bit of low-hanging fruit), but I think it's at a place where the overall skeleton is ok enough that it can go up on GitHub, where all the parts of it can be improved. A link to the repo and a detailed walkthrough of the nanochat speedrun are in the reply.
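The dollar figures in the tweet are straightforward GPU-hour arithmetic. A quick sanity check, assuming the ~$24/hour 8XH100 node rate implied by the tweet's own numbers (the rate itself is an inference, not stated in the tweet):

```python
# Assumed hourly rate for an 8XH100 node, inferred from the tweet's
# "~$100 for ~4 hours" and "~$1000 for ~41.6 hours" figures.
NODE_PRICE_PER_HOUR = 24.0


def training_cost(hours: float) -> float:
    """Dollar cost of renting the node for the given number of hours."""
    return hours * NODE_PRICE_PER_HOUR


def hours_for_budget(budget: float) -> float:
    """Training hours a given dollar budget buys at the assumed rate."""
    return budget / NODE_PRICE_PER_HOUR


print(training_cost(4))        # 96.0 -> the "~$100" speedrun tier
print(hours_for_budget(1000))  # ~41.7 hours -> the "~$1000" tier
```

Both tiers in the tweet are consistent with a single flat node rate, which is why the cost scales linearly with training time.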
First impressions are not good 😬: 404 errors and generic ChatGPT-style image wrappers. Good luck, team!
Ðaniel ₩onder retweeted
"Marketing" as a catch-all term is meaningless and unspecific. Here's a complete taxonomy of go-to-market functions. Every time you hire into your GTM function, go through this list and rate the candidate’s experience with each item:
Fleek is the core of my IPFS decentralised hosting. Earlier this year they pivoted to AI agents. I actually used their first release to prototype a prize-winning agent for onchain analysis. Now they've built an entire agentic platform. Interested to see the evolution.
$FLK TGE - Oct 14, 2025
Fleek: The Fantasy AI Social App Where Everyone Wins
Let the dreams begin. More soon.