Software engineer || Developer || HTML, JavaScript || Always learning, always coding

Las Vegas, NV
Joined January 2023
maya 🦄✨ retweeted
Deleting code be like 😂
maya 🦄✨ retweeted
Build Telegram bots with Node.js
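A minimal sketch of what that might look like, assuming the community node-telegram-bot-api package and a bot token from @BotFather (the token handling and replies here are placeholders, not taken from the linked post):

// Minimal echo/command bot, assuming the node-telegram-bot-api package
// (npm install node-telegram-bot-api) and a token from @BotFather.
const TelegramBot = require("node-telegram-bot-api");

const token = process.env.TELEGRAM_BOT_TOKEN; // placeholder: supply your own token
const bot = new TelegramBot(token, { polling: true });

// Respond to /start with a greeting.
bot.onText(/\/start/, (msg) => {
  bot.sendMessage(msg.chat.id, "Hello! Send me any message and I'll echo it back.");
});

// Echo everything else.
bot.on("message", (msg) => {
  if (!msg.text || msg.text.startsWith("/")) return;
  bot.sendMessage(msg.chat.id, `You said: ${msg.text}`);
});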
maya 🦄✨ retweeted
Web components are a set of standardized browser APIs that let you build custom, reusable HTML components. You can use them to create self-contained components that work across frameworks - or even without a framework. Here, Mark teaches you about custom elements and events, data communication, and how to build an app that uses web components. freecodecamp.org/news/a-brie…
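As a rough sketch of the two ideas the post covers (custom elements and event-based data communication), a minimal custom element might look like the following; the element name user-card and the event name card-selected are made up for illustration, not taken from the article.

// Minimal custom element: renders into a shadow root and communicates
// outward with a standard CustomEvent.
class UserCard extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: "open" });
  }
  connectedCallback() {
    this.shadowRoot.innerHTML = `<button>${this.getAttribute("name") ?? "Anonymous"}</button>`;
    this.shadowRoot.querySelector("button").addEventListener("click", () => {
      // Data flows out via events; bubbles + composed lets it cross the shadow boundary.
      this.dispatchEvent(new CustomEvent("card-selected", {
        bubbles: true,
        composed: true,
        detail: { name: this.getAttribute("name") },
      }));
    });
  }
}
customElements.define("user-card", UserCard);

// Usage in the host page:
// <user-card name="Maya"></user-card>
// document.addEventListener("card-selected", e => console.log(e.detail.name));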
maya 🦄✨ retweeted
20 Data Structures (image credit: Shalini Goyal)
maya 🦄✨ retweeted
🤔🚀 Comment your answers below! 👇
maya 🦄✨ retweeted
The final prize of this year's Giveaway Week will be a @RAKwireless Blues ONE development kit. cnx-software.com/2025/11/09/… It's an Arduino-programmable LoRaWAN, LTE-M, and NB-IoT devkit that can be useful for industrial automation and asset-tracking applications. As usual, it's a global contest, and you can enter by commenting on the post linked above (after reading the contest rules there)
maya 🦄✨ retweeted
Hey fellow Linux users 🐧🐧 Does it look like this on your keyboard, too? 👇 😂
maya 🦄✨ retweeted
📣 Our friends at @TuxCare_ need your help! Please take a few minutes to complete their annual industry survey. Your insights will help shape TuxCare's 2026 Year-in-Review report on the evolving open source and enterprise Linux landscapes. The survey closes December 1st. Take it here: tuxcare.com/open-source-surv…
No-as-a-Service (NaaS): The API That Says "No" So You Don’t Have To.
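If you want to call it from Node.js, a tiny sketch along these lines should work; the endpoint URL and response shape below are placeholders, so check the project's README for the real ones.

// Hypothetical client for a "No-as-a-Service" style API.
// The URL and JSON field below are placeholders, not the project's documented endpoint.
const NAAS_URL = "https://example.com/no"; // placeholder endpoint

async function getNo() {
  const res = await fetch(NAAS_URL);     // Node 18+ has global fetch
  const { reason } = await res.json();   // assumes a JSON body like { "reason": "..." }
  return reason;
}

getNo().then((reason) => console.log(reason)).catch(console.error);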
maya 🦄✨ retweeted
The best part of a project built by volunteers is that anyone can volunteer, from high schoolers to professors and anyone in between! High schoolers have written complex C code and hand-written assembly in FFmpeg.
maya 🦄✨ retweeted
Maybe it should be a feature 😂
maya 🦄✨ retweeted
thatsItTheWholeOfMathematicsIsSolved teddit.net/comments/1or9l01
maya 🦄✨ retweeted
Programming doesn’t need to be difficult 🤷🏻‍♂️
maya 🦄✨ retweeted
The sixth prize of Giveaway Week 2025 is VOIPAC's iMX93 Pro industrial development kit cnx-software.com/2025/11/08/… It's based on a system-on-module powered by the @NXP iMX93 Arm Cortex-A55/M33 Edge AI SoC with 2GB LPDDR4 and 16GB eMMC flash. The development kit also offers dual Gigabit Ethernet, WiFi 6, and Bluetooth 5.3 connectivity, HDMI and DisplayPort display interfaces, three audio jacks, two MIPI CSI camera interfaces, various USB ports, M.2 sockets, and more. Alternatively, you can select a similar, newer iMX91 devkit. The company provides Linux built with Yocto Project 5.0 Scarthgap. To enter the contest, comment on the blog post linked above.
maya 🦄✨ retweeted
Here is a great example of how you can contribute to FFmpeg without code. Our friend @kamedo2 has been organising audio quality listening tests for decades, providing valuable audio quality information.
Replying to @kamedo2
For now I've only put together a placeholder: the FFmpeg command and the frame for the graph. Having tried it, the bitrate of $ ffmpeg -i in.wav -c:a libopus -b:a 96k out.opus is a bit too high, so I'll adjust it next so the comparison is fair. Microsoft's -c:a aac_mf can't be fine-tuned by design.
maya 🦄✨ retweeted
Pre-training Objectives for LLMs

✓ Pre-training is the foundational stage in developing Large Language Models (LLMs).
✓ It involves exposing the model to massive text datasets and training it to learn grammar, structure, meaning, and reasoning before it is fine-tuned for specific tasks.
✓ The objective functions used during pre-training determine how effectively the model learns language representations.

→ Why Pre-training Matters
✓ Teaches the model general linguistic and world knowledge.
✓ Builds a base understanding of syntax, semantics, and logic.
✓ Reduces data requirements during later fine-tuning.
✓ Enables the model to generalize across multiple domains and tasks.

→ Main Pre-training Objectives

1. Causal Language Modeling (CLM)
✓ Also known as Autoregressive Training, used by models like GPT.
✓ Objective → Predict the next token given all previous tokens.
✓ Example:
→ Input: “The sky is”
→ Target: “blue”
✓ The model learns word sequences and context flow, ideal for text generation and completion.
✓ Formula (simplified):
→ Maximize P(w₁, w₂, ..., wₙ) = Π P(wᵢ | w₁, ..., wᵢ₋₁)

2. Masked Language Modeling (MLM)
✓ Introduced with BERT, a bidirectional training objective.
✓ Objective → Predict missing words randomly masked in a sentence.
✓ Example:
→ Input: “The [MASK] is blue.”
→ Target: “sky”
✓ Allows the model to see context from both left and right, capturing deeper semantic relationships.
✓ Formula (simplified):
→ Maximize P(masked_token | visible_tokens)

3. Denoising Autoencoding
✓ Used by models like BART and T5.
✓ Objective → Corrupt the input text (e.g., mask, shuffle, or remove parts) and train the model to reconstruct the original sentence.
✓ Encourages robust understanding and recovery of meaning from noisy or incomplete inputs.
✓ Example:
→ Input: “The cat ___ on the mat.”
→ Target: “The cat sat on the mat.”

4. Next Sentence Prediction (NSP)
✓ Used alongside MLM in early BERT training.
✓ Objective → Predict whether one sentence logically follows another.
✓ Example:
→ Sentence A: “He opened the door.”
→ Sentence B: “He entered the room.”
→ Label: True
✓ Helps the model learn coherence and discourse-level relationships.

5. Permutation Language Modeling (PLM)
✓ Used by XLNet, combining autoregressive and bidirectional learning.
✓ Objective → Predict tokens in random order rather than fixed left-to-right.
✓ Enables the model to capture broader context and dependencies without masking.

6. Contrastive Learning Objectives
✓ Used in multimodal and instruction-based pretraining.
✓ Objective → Maximize similarity between semantically related pairs (e.g., a caption and its image) and minimize similarity between unrelated pairs.
✓ Builds robust cross-modal and conceptual understanding.

→ Modern Combined Objectives
✓ Modern LLMs often merge multiple pre-training objectives for richer learning.
✓ Example:
→ T5 uses denoising + text-to-text generation.
→ GPT-4 expands causal modeling with instruction-tuned objectives and reinforcement learning (RLHF).
✓ These hybrid objectives enable models to perform a wide range of generative and comprehension tasks effectively.

→ Quick tip
✓ Pre-training objectives teach LLMs how to predict, reconstruct, and reason over text.
✓ CLM → next-word prediction.
✓ MLM → masked token recovery.
✓ Denoising & NSP → structure and coherence.
✓ Contrastive → cross-domain learning.
✓ Together, they form the foundation for the deep understanding and fluency that define modern LLMs.

📘 Grab this ebook to Master LLMs: codewithdhanian.gumroad.com/…
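To make the difference between the CLM and MLM objectives above concrete, here's a tiny JavaScript sketch that builds the training pairs each objective would see for one toy sentence. It assumes whitespace "tokenization" and a literal [MASK] placeholder purely for illustration; real LLMs use subword tokenizers and operate on token IDs.

// Toy illustration of CLM vs. MLM training pairs (illustration only).
const sentence = "the sky is blue today";
const tokens = sentence.split(" ");

// Causal LM (GPT-style): predict the next token from the prefix.
const clmPairs = tokens.slice(1).map((target, i) => ({
  input: tokens.slice(0, i + 1).join(" "),
  target,
}));

// Masked LM (BERT-style): hide one token and predict it from both sides.
function mlmPair(maskIndex) {
  const masked = tokens.map((t, i) => (i === maskIndex ? "[MASK]" : t));
  return { input: masked.join(" "), target: tokens[maskIndex] };
}

console.log(clmPairs);
// [ { input: 'the', target: 'sky' }, { input: 'the sky', target: 'is' }, ... ]
console.log(mlmPair(1));
// { input: 'the [MASK] is blue today', target: 'sky' }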
maya 🦄✨ retweeted
Linux Mint's latest updates include an expanded System Information tool, new System Administration features, LMDE 6 end-of-life details, and the new XSI icons project: alternativeto.net/news/2025/…
maya 🦄✨ retweeted
Train, serve, and manage your AI/ML models on GKE with our new and improved documentation → goo.gle/43iYUG6 Whether you're training large foundation models or building a comprehensive AI platform, GKE offers the control and performance you need.