Backend & Low-Level Dev | Systems Architect | Code Optimizer 👩‍💻

Missouri, USA
Joined August 2024
Let’s grow together. Follow for a follow back 📈
Anna🎉 retweeted
50 acronyms and technical terms every programmer should know. From SPA to CI/CD, by way of SOLID and TDD. You'll see them in PRs, meetings, and documentation. Save this. You'll run into all of them:
Anna🎉 retweeted
Data Science
Anna🎉 retweeted
You can’t solve a problem with the same mindset that caused it. – Albert Einstein
Want to see how AI really works in finance? Join my LIVE WEBINAR — 10 AI Practical Cases for Finance: luma.com/tgvzynlu

I’ve mapped 21 real ChatGPT use cases showing how financial analysts can automate work, uncover insights, and focus on strategy instead of spreadsheets.

📊 Modeling. Build fully linked financial models, automate DCFs, run sensitivity analyses — in minutes, not hours.
📈 Reporting. Turn raw data into management commentary, board packs, and KPI dashboards instantly.
💰 Budgeting. Generate templates, simulate cost scenarios, and review budgets faster and more accurately.
🔮 Forecasting. Automate rolling forecasts, simulate what-if cases, and test forecast accuracy effortlessly.

Each of these use cases replaces repetitive manual work with structured thinking — and gives analysts back the time to focus on strategy.

P.S. Stop using ChatGPT like Google; learn advanced techniques! If you want this visual in PDF, just drop a comment and I’ll send it to you. (Important: follow me so I can DM you!)
Anna🎉 retweeted
International Dividend ETF Cheat Sheet
Anna🎉 retweeted
Minimal setups are so underrated. Why?
Anna🎉 retweeted
Pre-training Objectives for LLMs

✓ Pre-training is the foundational stage in developing Large Language Models (LLMs).
✓ It involves exposing the model to massive text datasets and training it to learn grammar, structure, meaning, and reasoning before it is fine-tuned for specific tasks.
✓ The objective functions used during pre-training determine how effectively the model learns language representations.

→ Why Pre-training Matters
✓ Teaches the model general linguistic and world knowledge.
✓ Builds a base understanding of syntax, semantics, and logic.
✓ Reduces data requirements during later fine-tuning.
✓ Enables the model to generalize across multiple domains and tasks.

→ Main Pre-training Objectives

1. Causal Language Modeling (CLM)
✓ Also known as Autoregressive Training, used by models like GPT.
✓ Objective → Predict the next token given all previous tokens.
✓ Example:
→ Input: “The sky is”
→ Target: “blue”
✓ The model learns word sequences and context flow — ideal for text generation and completion.
✓ Formula (simplified):
→ Maximize P(w₁, w₂, ..., wₙ) = Π P(wᵢ | w₁, ..., wᵢ₋₁)

2. Masked Language Modeling (MLM)
✓ Introduced with BERT, a bidirectional training objective.
✓ Objective → Predict missing words randomly masked in a sentence.
✓ Example:
→ Input: “The [MASK] is blue.”
→ Target: “sky”
✓ Allows the model to see context from both left and right, capturing deeper semantic relationships.
✓ Formula (simplified):
→ Maximize P(masked_token | visible_tokens)

3. Denoising Autoencoding
✓ Used by models like BART and T5.
✓ Objective → Corrupt the input text (e.g., mask, shuffle, or remove parts) and train the model to reconstruct the original sentence.
✓ Encourages robust understanding and recovery of meaning from noisy or incomplete inputs.
✓ Example:
→ Input: “The cat ___ on the mat.”
→ Target: “The cat sat on the mat.”

4. Next Sentence Prediction (NSP)
✓ Used alongside MLM in early BERT training.
✓ Objective → Predict whether one sentence logically follows another.
✓ Example:
→ Sentence A: “He opened the door.”
→ Sentence B: “He entered the room.”
→ Label: True
✓ Helps the model learn coherence and discourse-level relationships.

5. Permutation Language Modeling (PLM)
✓ Used by XLNet, combining autoregressive and bidirectional learning.
✓ Objective → Predict tokens in random order rather than fixed left-to-right.
✓ Enables the model to capture broader context and dependencies without masking.

6. Contrastive Learning Objectives
✓ Used in multimodal and instruction-based pretraining.
✓ Objective → Maximize similarity between semantically related pairs (e.g., a caption and its image) and minimize similarity between unrelated pairs.
✓ Builds robust cross-modal and conceptual understanding.

→ Modern Combined Objectives
✓ Modern LLMs often merge multiple pre-training objectives for richer learning.
✓ Example:
→ T5 uses denoising + text-to-text generation.
→ GPT-4 expands causal modeling with instruction-tuned objectives and reinforcement learning (RLHF).
✓ These hybrid objectives enable models to perform a wide range of generative and comprehension tasks effectively.

→ Quick tip
✓ Pre-training objectives teach LLMs how to predict, reconstruct, and reason over text.
✓ CLM → next-word prediction.
✓ MLM → masked token recovery.
✓ Denoising & NSP → structure and coherence.
✓ Contrastive → cross-domain learning.
✓ Together, they form the foundation for the deep understanding and fluency that define modern LLMs.

📘 Grab this ebook to Master LLMs: codewithdhanian.gumroad.com/…
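To make the CLM and MLM objectives above concrete, here is a minimal Python/PyTorch sketch of how each one builds its training targets from the same token sequence. The token IDs and MASK_ID value are invented for illustration and do not come from any real tokenizer.

# Minimal illustrative sketch (PyTorch); token IDs and MASK_ID are made up.
import torch

tokens = torch.tensor([101, 7, 42, 13, 99, 102])     # a toy tokenized sentence

# Causal LM (CLM): each position predicts the *next* token,
# so the targets are the input sequence shifted left by one.
clm_inputs  = tokens[:-1]
clm_targets = tokens[1:]

# Masked LM (MLM): hide some positions and predict only those.
# Real pre-training masks ~15% of positions at random; the mask is fixed here
# so the example output is stable.
MASK_ID = 103                                         # hypothetical [MASK] token ID
mask = torch.tensor([False, False, True, False, True, False])
mlm_inputs = tokens.clone()
mlm_inputs[mask] = MASK_ID
mlm_targets = torch.full_like(tokens, -100)           # -100 = ignored by cross-entropy
mlm_targets[mask] = tokens[mask]

print("CLM:", clm_inputs.tolist(), "->", clm_targets.tolist())
print("MLM:", mlm_inputs.tolist(), "->", mlm_targets.tolist())

The CLM target construction mirrors the simplified formula in the post: predicting tokens[1:] from tokens[:-1] is exactly maximizing Π P(wᵢ | w₁, ..., wᵢ₋₁), one next-token term per position.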
Anna🎉 retweeted
AWS skills that interviewers actually care about.
LLM-powered subdomain enumeration tool.⚔️ - github.com/samogod/samoscout… #infosec #cybersec #bugbountytips
Anna🎉 retweeted
Was asked at JP Morgan:
Anna🎉 retweeted
20 AI Use Cases for Finance

AI is no longer “nice to have” in finance — it’s already transforming how we analyze, forecast, and report. Here’s what you can do with AI today:

Financial Statement Analysis – spot trends, ratios, and red flags instantly
Anna🎉 retweeted
How to set priorities
Anna🎉 retweeted
This is what a Senior SWE at Bloomberg's GitHub profile looks like
This is what a Nix committer's GitHub profile looks like.
Anna🎉 retweeted
The most effective AI Agents are built on these core ideas. It's what powers Claude Code. It's referred to as the Claude Agent SDK Loop, which is an agent framework to build all kinds of AI agents. (bookmark it)

The loop involves three steps:

Gathering Context: Use subagents (parallelize them for task efficiency when possible), compact/maintain context, and leverage agentic/semantic search for retrieving relevant context for the AI agent. Hybrid search approaches work really well for domains like agentic coding.

Taking Action: Leverage tools, prebuilt MCP servers, bash/scripts (Skills have made it a lot easier), and generate code to take action and retrieve important feedback/context for the AI agent. Turns out you can also enhance MCP and token usage through code execution and routing, similar to how LLM routing increases efficiency in AI Agents.

Verifying Output: You can define rules to verify outputs, enable visual feedback (this becomes increasingly important in multimodal problems), and consider LLM-as-a-Judge to verify quality based on fuzzy rules. Some problems will require visual cues and other forms of input to perform well.

Don't overcomplicate the workflow (e.g., don't reach for computer-using agents when a simple Skill with clever scripts will do). This is a clean, flexible, and solid framework for how to build and work with AI agents in all kinds of domains.
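As a rough sketch of that gather / act / verify loop in Python — function names and the Verdict type here are hypothetical stand-ins, not the actual Claude Agent SDK API:

# Hypothetical sketch of a gather-context -> take-action -> verify-output loop.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Verdict:
    passed: bool
    feedback: str = ""

def agent_loop(
    task: str,
    gather_context: Callable[[str], List[str]],
    take_action: Callable[[str, List[str]], str],
    verify_output: Callable[[str, str], Verdict],
    max_turns: int = 5,
) -> Optional[str]:
    # 1. Gathering Context: subagents, semantic/agentic search, context compaction.
    context = gather_context(task)
    for _ in range(max_turns):
        # 2. Taking Action: tools, MCP servers, bash/scripts, generated code.
        result = take_action(task, context)
        # 3. Verifying Output: rules, visual feedback, or LLM-as-a-judge.
        verdict = verify_output(task, result)
        if verdict.passed:
            return result
        context.append(verdict.feedback)   # feed verification results back as context
    return None

# Toy usage with trivial stand-ins:
answer = agent_loop(
    "say hello",
    gather_context=lambda task: [f"task: {task}"],
    take_action=lambda task, ctx: "hello",
    verify_output=lambda task, out: Verdict(passed=(out == "hello")),
)
print(answer)  # hello

The point of the sketch is the shape of the loop: verification output is appended to the working context, so a failed check becomes new context for the next action rather than a dead end.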
How Private Equity Creates Value
Anna🎉 retweeted
How to protect your job from AI
Anna🎉 retweeted
The Data Analyst Roadmap
Anna🎉 retweeted
How to analyze a company in less than 5 minutes:
Study these ratios: