Founder @2ndSightLab | Pentester | Researcher | AWS Security Hero | GSE | Former IANS, SANS faculty | Contact: LinkedIn

Savannah, GA
Joined March 2009
Chronicling my venture into AI here. 🤖 Sept 25 was a key post: I started exploring production-ready code, immediately saw the pitfalls, and wrote a framework and better context. A month later… I've accomplished a lot. No time to write. Follow for updates. medium.com/cloud-security/ar…
Today I am running Q CLI with a custom agent as described in the blog posts pinned to my profile. The agent's context file explains the code the agent can edit and read. And yet, the agent is trying to access this file for no apparent reason: crates/chat-cli/src/cli/mod.rs. Put controls around your agents!
Sometimes you need a pentesting team and the management overhead that goes with it. Smaller teams and startups may want that one-off person who is more affordable, with less overhead, and you know exactly who is doing the work. That's me. 😁 Yeah, you can go watch my RSA talk from 2020, read my blog, GitHub, etc. I don't publish my proprietary tools publicly, however. I'm currently leveraging that fuzzer I wrote in conjunction with AI. This past year it helped me get more coverage on an API that had no Swagger/OpenAPI files. I've been working with that customer for about 4 or 5 years now. There are pros and cons to using a larger team or company versus a small boutique firm. Both are good for different reasons.
Picking a pentest firm (completely biased but maybe not wrong POV):
- Look for companies with public contributions. Bug bounties, CVEs, open source tools, talks, content, etc. can all be indicators of a solid team, one that gives back and shares its time with the community.
- Ask to talk to the pentester(s) who will be doing the work.
- Ask about their methodology and how they do things.
- Ask for a sample report.
- Ask questions about specific findings to see the level of depth/expertise the testers may have.
Let's Talk AI: What Is an AI Model? How AI Models Work & Are Built ~~ I really like some of the comments in this video:
* Should we be using AI for this process or problem?
* Should a human be in the loop?
* Auditing AI solutions
Good explanations too. piped.video/watch?v=oi2XDg4s…
Sounds right. But I have yet to need an MCP for what I am doing. I’m sure I’ll find a reason eventually.
mcps are changing
turns out designing mcps to load every tool definition into the model prompt was a bad idea
anthropic's nov 4 blog post suggests a new pattern: treat each mcp server like a normal code library, e.g. typescript modules or files, and let the agent write and run small programs that do two things:
- discover only what is needed: list a servers directory to see what exists, open just the specific tool files, import only those functions
- process data locally: call mcp tools from code, then filter, join, and aggregate inside a sandboxed runner so only the small final bits go back to the model
doing this dramatically cuts tokens: anthropic shows a typical case dropping from 150k tokens down to ~2k (98.7% savings)
below a viz showing before/after
Teri Radichel #cybersecurity #pentesting retweeted
A penetration tester got root access to our Kubernetes cluster in 15 minutes. Here's what they exploited.

The attack chain:
- Found exposed Kubernetes dashboard (our bad)
- Dashboard had view-only service account (we thought this was safe)
- Service account could list secrets across all namespaces
- Found AWS credentials in a secret
- Used AWS credentials to access EC2 instance profile
- Instance profile had full Kubernetes admin via IAM
- Used kubectl to create privileged pod
- Escaped to node
- Root access to entire cluster

What we thought we did right:
- Dashboard was read-only
- Secrets were encrypted at rest
- Network policies were in place
- Regular security updates

What we missed:
- Dashboard shouldn't be exposed at all
- Service accounts need least-privilege permissions
- Secrets shouldn't contain AWS credentials (use IRSA instead)
- Pod Security Policies weren't enforced
- Node access wasn't hardened

The fix took 2 weeks:
- Removed Kubernetes dashboard entirely
- Implemented IRSA for all pod AWS access
- Applied strict PSPs/Pod Security Standards
- Audited all RBAC permissions
- Instituted regular penetration testing

Cost: $24K for the pentest
Value: Prevented what could have been a catastrophic breach
Teri Radichel #cybersecurity #pentesting retweeted
I am not balling ... x64dbg MCP is not that great, do your dynamic analysis without AI - it would save you so much time. Claude has no idea how to set proper breakpoints 🥲
I don't mind leveraging AI to help with some reversing tasks, to be honest. What would take me 10 weeks, now takes me 10 days, I have MCP hooked to x64dbg now, so we are balling.
I remember when I was a cloud architect and we were on AWS ECS, and Kubernetes was new and starting to become all the rage for reasons I still don't understand. My team wanted to switch to it. My question was: "Why?" They couldn't give me a good reason that would justify the cost in time and money. In addition, at the time you could not segregate containers and services with proper network controls. Now you can, but the separation of concerns between infrastructure and applications is still better with AWS solutions such as ECS and Fargate. It's easier to say the network team manages this and the application developers manage that, and lock it down. As Kubernetes exploded they added some network controls and I wondered if I was wrong. I added a lab to my cloud class to deploy a cluster. As far as I can tell, I was right and people are finally figuring it out. But I don't care what you use now. I'd be happy to pentest anything. 😁
Kubernetes migration almost killed our startup.

Where we were:
- 8 EC2 instances
- Ansible for deploys
- Boring but working
- $1200/month AWS bill

Why we migrated:
- New investor wanted 'cloud-native'
- Engineers wanted K8s experience
- Competitors were using it
- Seemed like the future

6 months later:
- 3 engineers spending full-time on K8s
- AWS bill at $4500/month
- Deploys took longer than before
- More outages, not fewer
- Product development stalled

We rolled back:
- Moved to ECS Fargate
- 2 week migration
- Back to $1800/month
- Engineers back on features

K8s is amazing for scale. We weren't at scale. Technology should solve problems you actually have.
I'm trying to tell you… see my prior posts; pretty much everything I wrote about batch jobs on my blog is applicable as well. Put your agents in sandboxes.
Dark Reading | AI Agents Are Going Rogue: Here's How to Rein Them In darkreading.com/cyber-risk/a…
Teri Radichel #cybersecurity #pentesting retweeted
MCP Snitch - macOS app that intercepts and monitors MCP server communications for security analysis, access control, and audit logging for AI tool usage. github.com/Adversis/mcp-snit… #AI #MCP #cybersecurity
This is interesting. Chrome was supporting an out-of-date version; it sounds like it was 1.0 and 3.0 is out. XSLT is super powerful, though I do see how it could be abused. You can still use XSLT libraries client side. The way I used it was server side, to produce static HTML pages that were then published to a web server, never in the browser. I don't think JavaScript frameworks really provide the same generic data-processing functionality, but I haven't looked at them in detail lately.
Great news for browser security (and not just because it cites my XSLT research :)). A lot of younger folks don't even know this feature exists, yet it is/was the default attack surface in all major web browsers, with a history of exploitation. developer.chrome.com/docs/we…
Teri Radichel #cybersecurity #pentesting retweeted
Amazon Cognito user pools now supports private connectivity with AWS PrivateLink aws.amazon.com/about-aws/wha…
Yep.
ChatGPT just helped researchers crack XLoader malware in hours — work that used to take days. AI unpacked the code, found keys, and exposed C2 domains. Big shift for malware analysis. Check this story ↓ thehackernews.com/2025/11/th…
Just noticed Werner Vogels’ keynote is at a different time this year at AWS re:Invent if you plan your schedule around that like I do 😉
Yes. It’s like trying to cure cancer with all the different types of mutations that can occur. Good luck. I operate in a locked down box, never give it my credentials, and hope for the best.
Multi-Turn Attacks Expose Weaknesses in Open-Weight LLM Models infosecurity-magazine.com/ne… #cybersecurity #infosec #hacking
Seems like botnets have failover and disaster recovery figured out better than a lot of companies hit by ransomware.
2009: FireEye took down the Mega-D botnet by disabling its C&C infrastructure. Unfortunately, within 2 weeks the botnet was operational again at pre-takedown levels. Were botnets sold in bottles in the long, long ago? Nobody knows.
My absolute biggest fear when I was architecting a similar service for a company, where I had to argue about why we needed all those security controls, segregation of duties, and an architecture that required three-party collusion to access the data. Yes, we needed to implement things securely before launch. No, don't expose that API just for testing. No one understands what I'm trying to say or believes me until it happens. And I hate being right when it does.
Dark Reading | SonicWall Firewall Backups Stolen by Nation-State Actor darkreading.com/cyberattacks…
I see someone has been busy making up buzzwords again. Defense in Depth So glad I don’t have to teach buzzwords anymore and can focus on what actually matters. Definition of Cybersecurity Mesh - Gartner Information Technology Glossary gartner.com/en/information-t…
Related to my last post… App that implements consistent code logging to screen and file in two tries…

After dinner I touched up the README for the app that tests the log router. I told it to configure and test logging in TEXT format to a SCREEN and FILE logger. I let Q go trusting all actions and it wrote a working program in a matter of minutes that used my log router, and it seemed to work. However, when I reviewed the code it was using println!s instead of the log router. So I wrote a test to check for no print lines. I forgot to add the signal handler library, so I updated the README and told Q to clear cache and check all README instructions again. Boom 💥 It works with 24 tests, including my common tests.

As it was writing code I was thinking I can add all the boilerplate configuration, error handling, and logging code (for binaries) to new projects so Q doesn't have to do all that and won't mess it up. That should save some cycles. I'll probably do that tomorrow, retest all my projects with the new tests, and implement the AWS resources and test the rest of the loggers and formats. Cruising now! 🚙🏁
Today's AI Adventure: 🤖

Today I went through all my READMEs a bunch of times asking Q to find inconsistencies and errors. I have a number of coordinated projects and I wondered if any inconsistencies could be causing problems. The main question I asked, after getting the READMEs coordinated and in sync, was whether anything would prevent me from starting a new project and telling Q to implement an application that processes logs with my log router. Doing that uncovered a potential concurrency problem between projects in different threads. Q alerted me to a potential initialization issue.

My efforts did not entirely resolve all problems. I still had projects which didn't follow all the rules in the READMEs, and that prevented other projects from compiling. I still had projects trying to subvert my rules designed to make sure no errors are ever hidden. I tried to overcome this by asking Q to check the projects over and over until no more issues got reported. I worked on multiple projects at the same time to discover integration issues. When I discovered projects implementing things incorrectly or trying to hide errors, I added common tests to check for those things and report them. I also added clarification in READMEs to be very explicit about certain aspects of the implementation.

Because I use traits with rules for implementation, I was able to pretty quickly obtain consistency across projects. I do think it sped up development. The odd thing, though, is when a bunch of projects implement the same trait according to the exact same instructions and ONE will just go completely off the rails. This time it was my screen logger project, of all things. The screen logger should be the simplest of all loggers. Take a message. Pass it to a formatter. Print it. For some super annoying reason it kept failing to follow the pattern none of the other loggers had problems implementing. 😩😩😩😩 The biggest problem was trying to subvert error handling rules.
It was as if the screen logger was a rogue employee, an insider threat trying to implement convoluted code that would not show every error on the screen. Out of like 30 projects, it was nearly the first one I started fixing today (besides the one it was blocking) and was the second to last one I finally got working to unblock the one I started with.

The end result is pretty cool though. I asked Q if there were any circular dependencies and it said no, and mapped out the flow of the architecture nicely, showing a beautifully satisfying hierarchy of dependencies that play nicely together. It's that same kind of satisfying feeling I'd have when finishing an oil painting as an art major. You just have this feeling when you get it right and it's beautiful. Everything feels harmonious. Now Q says I can implement a project and just tell it to use the log router and it will all just work. I need to deploy a few AWS resources and then we'll find out how true that is tomorrow. 🤞