19%er ~ I did not, in fact, read the documentation. Redstick. Viking. Redneck. #aliketv #DFIR #OSINT #Cyberz 🎶 I don't run from nothing, Dawg 🎶

Joined November 2017
Can't imagine why. <facepalm>
Okay. That's it. Y'all are right. I guess I really am psychic.
Wontonimo Bae .🤍. retweeted
I tell myself this every day. 💕
Wontonimo Bae .🤍. retweeted
China-linked threat group Storm-1849 (ArcaneDoor) spent October exploiting @Cisco ASA zero-days targeting U.S. defense, finance & government orgs. Even after @CISAgov's patch order, attacks persisted. #cybersecurity #CISO #infosec #ITsecurity bit.ly/47WJRo5
It’s not having a list of rules, I’ll tell you that much.
Replying to @LibertarianG0th
Do u even know what the punk movement is
actual reply here:
A Chicago area couple logged into their retirement account only to find out it had been hacked, and a large chunk of their retirement savings was gone. The response from the online brokerage firm is only adding insult to injury. cbsnews.com/chicago/news/cou…
Earl Grey. Little lavender, little vanilla. Trust me.
Wontonimo Bae .🤍. retweeted
🙏🏼 🇮🇱
BREAKING: The IDF has recovered the body of Hadar Goldin — held by Hamas for 11 years. No, it didn’t start on October 7th.
The road to precedent, what does it look like?
A Chicago area couple logged into their retirement account only to find out it had been hacked, and a large chunk of their retirement savings was gone. The response from the online brokerage firm is only adding insult to injury. cbsnews.com/chicago/news/cou…
Wontonimo Bae .🤍. retweeted
#Doomflowers Update It's a tool with superpowers that is used by humans with low moral character and too much power in our Gov't and too much impact on our daily perceptions... aren't we glad Biden took control of AI?
AI's Dual Role in the Intelligence Community: Solving Cases vs. Framing Innocents

Artificial intelligence has become a powerful tool for intelligence and law enforcement agencies, enabling rapid analysis of vast datasets to crack complex cases. However, the same technologies, such as facial recognition and deepfake generation, can be weaponized to fabricate evidence, leading to wrongful accusations and miscarriages of justice. Below, I'll outline real-world examples of both applications, drawing from documented cases involving agencies like Homeland Security Investigations (HSI) and local police forces that collaborate with federal intelligence.

Examples of AI Helping Solve Cases

AI excels at processing unstructured data like DNA profiles, surveillance footage, and online traces, often reviving stalled investigations.

Golden State Killer Case (2018): Investigators used AI-powered genetic genealogy tools on the GEDmatch platform to analyze DNA from crime scenes and match it against public databases. This built a family tree that identified suspect Joseph James DeAngelo Jr., leading to his arrest and guilty plea for multiple murders after decades of unsolved cases.

HSI's Facial Recognition for Child Exploitation (2023): Homeland Security Investigations collaborated with U.K. police on a cold case involving child abuse imagery. AI facial recognition software scanned databases from thousands of cases, identifying the suspect and enabling his arrest within two weeks. This initiative has since helped identify hundreds of victims and perpetrators in archived cases, though AI matches require human verification for legal use.

Georgia Police's Cybercheck AI for Homicides and Trafficking: The Warner Robins Police Department employs Cybercheck, an AI tool that aggregates open-source internet data (e.g., social media, IP addresses, and location mapping) to create "CyberDNA" profiles. It has contributed to solving 209 homicide cases, 107 cold missing persons cases, 88 child pornography investigations, and 37 human trafficking cases across multiple states, including Georgia, by generating leads in roadblocked probes.

Somerset Police's Evidence Summarization Project (Ongoing): The U.K.'s Somerset Police piloted an AI system to review and summarize evidence from 27 cold cases, completing the task in 30 hours versus an estimated 81 years manually. While no full resolutions are public yet, it has streamlined resource allocation for deeper human-led follow-ups.

These tools, often integrated into broader intelligence workflows (e.g., via the Department of Justice's AI applications for surveillance and forensics), demonstrate AI's efficiency in pattern detection and lead generation.

Examples of AI Being Used to Frame Innocent People

Conversely, AI's flaws or malicious applications have led to false positives in identification or fabricated media that mimics evidence, disproportionately affecting marginalized groups and eroding trust in investigations.

Facial Recognition Misidentifications Leading to Wrongful Arrests: At least seven documented cases involve AI facial recognition errors by police, six targeting Black individuals. In 2020, Robert Williams was arrested in his Detroit driveway for a watch theft based on a blurry surveillance photo mismatched to his driver's license; he was detained for 30 hours before release. Similar errors ensnared Nijeer Parks (2019, Woodbridge, NJ shoplifting accusation), Porcha Woodruff (2023, Detroit carjacking investigation while she was pregnant), Michael Oliver (2019, Detroit larceny claim), Randall Reid (2022, arrested in Georgia on a Louisiana theft warrant), and Alonzo Sawyer (2022, Maryland assault accusation). All were cleared after the misidentifications came to light, highlighting biases in AI trained on skewed datasets.

Deepfake CCTV Fabrication Risks in Trials: Lawyers like Jerry Buting (from the Making a Murderer case) warn that AI can alter CCTV footage to depict innocents committing crimes, such as swapping faces onto video of a theft or assault. In a hypothetical but plausible scenario echoing the BBC drama The Capture, manipulated "evidence" could convict someone based on irrefutable-looking fakes, especially since prosecutors often out-resource defenses. Detection via metadata is possible but lags behind AI's evolution, potentially leading to more planted-evidence frames like Steven Avery's disputed 2005 murder case.

Rashmika Mandanna Deepfake Video (2023): An AI-generated video superimposed Indian actress Mandanna's face onto a British influencer's body in a revealing elevator scene, going viral and sparking harassment. While not a formal arrest, it illustrates how deepfakes can "frame" individuals for scandalous behavior, damaging reputations and inviting legal scrutiny. Indian authorities investigated, but the creator remains at large.

Taylor Swift Explicit Deepfakes (2024): AI-fabricated pornographic images of the singer spread on X and Reddit, amassing millions of views and prompting platform bans. This non-consensual "framing" as a sexual figure led to privacy invasions and calls for regulation, showing how deepfakes can escalate to defamation suits or public shaming that mimics criminal accusation.

In intelligence contexts, deepfakes pose risks for disinformation campaigns (e.g., by foreign actors framing dissidents), while facial recognition biases amplify systemic errors. Mitigation efforts include AI detection tools and ethical guidelines from bodies like the DOJ, but the technology's accessibility heightens vulnerabilities.
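The Cybercheck "CyberDNA" description above amounts to correlating identifiers from different open sources into per-person profiles, then scoring how many independent signals tie a profile to a case. A minimal stdlib-only sketch of that aggregation idea (all names, IP addresses, and the scoring rule are hypothetical illustrations, not Cybercheck's actual method):

```python
from collections import defaultdict

def build_profiles(observations):
    """Merge (person, kind, value) triples from open sources into profiles."""
    profiles = defaultdict(lambda: defaultdict(set))
    for person, kind, value in observations:
        profiles[person][kind].add(value)
    return profiles

def lead_score(profile, case_ips, case_area):
    """Count independent signals placing a profile near a case."""
    score = len(profile.get("ip", set()) & case_ips)  # shared IP addresses
    if case_area in profile.get("location", set()):   # geographic overlap
        score += 1
    return score

# Hypothetical observations scraped from social media / network logs.
obs = [
    ("alice", "ip", "203.0.113.7"),
    ("alice", "location", "Warner Robins"),
    ("alice", "handle", "@alice99"),
    ("bob", "ip", "198.51.100.2"),
    ("bob", "location", "Macon"),
]
profiles = build_profiles(obs)
print(lead_score(profiles["alice"], {"203.0.113.7"}, "Warner Robins"))  # 2
print(lead_score(profiles["bob"], {"203.0.113.7"}, "Warner Robins"))    # 0
```

A real system would weight signal types differently and, as the post notes for facial recognition, any machine-generated lead still needs human verification before legal use.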
What is happening?
Fixing the budget beats finding more buyers for American debt. It would avoid the danger of crisis and make the country’s reserve-currency status more likely to endure economist.com/leaders/2025/1…
Waiting tables and doing yoga regularly (PE class), but I think any light cardio is good enough. Walking a lot, bending and lifting a little is what I was doing. It still works, last time I tried, but yoga and cardio at least 5x a week is required to make it go really fast.
What's the fastest way
spook goop and spooky goop are really close tbf to whatever nerd sent you that.
> make post saying tired of AI slop malware
> say want to see spooky goop
> get sent source code to Lockbit 5

I said "spooky goop". I did not say "the source code to something that an adversary of the United States government is currently using to perform ransomware attacks against critical infrastructure".

Spooky goop is usually like, interesting malware stuff. What I have received is more akin to "dangerous goop". Generally speaking, I do not like dangerous goop, because dangerous goop is dangerous.
Wontonimo Bae .🤍. retweeted
To be honest, it takes a pretty brave man to go up in a MiG-21 in 2025.
“Killing of Christians: ‘The Nigeria Air Force is ready to face President Trump and the America military. We control our skies and no one will enter our airspace,’ a Nigerian Air Force officer wrote on TikTok while posting this video.”
Wontonimo Bae .🤍. retweeted
15 year old me never got over the writers not letting us have them stay together. 😔💔
Lee Majors and Lindsay Wagner 1978 and 2017.
You can hate on it, but that doesn't make it not good.
This tweet is unavailable