SIN-02 Security and Innovation Newsletter Feb 2nd 2025

Welcome to the second issue of the "Security and Innovation Notes" newsletter, where I curate bi-weekly insights on cybersecurity and innovation. Expect a thoughtfully selected mix of articles, from AI advancements to cybersecurity news and trends, saving you time and introducing you to fresh perspectives.
🛡️ What is Reachability Analysis in the context of ASPM (Application Security Posture Management)?
Reachability analysis helps organizations prioritize and manage vulnerabilities by determining which ones are actually exploitable in their applications. This method reduces noise from irrelevant alerts, allowing security teams to focus on real risks without creating friction with engineering teams. Phoenix Security shares how they enhance this process with advanced techniques that streamline vulnerability management and improve overall security posture. The article focuses on how to use contextual deduplication and contextual cyber threat intelligence to prioritize findings and silence the vulnerabilities that are not actually exploitable. Link
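To make the idea concrete, here is a minimal sketch in Python of how reachability and exploit intelligence can drive prioritization. The findings list and fields are made up for illustration; this is not Phoenix Security's actual algorithm:

```python
# Hypothetical findings: 'reachable' would come from call-graph analysis,
# 'exploited_in_wild' from a threat intelligence feed.
findings = [
    {"cve": "CVE-2024-0001", "package": "libfoo", "reachable": True,  "exploited_in_wild": True},
    {"cve": "CVE-2024-0002", "package": "libbar", "reachable": False, "exploited_in_wild": True},
    {"cve": "CVE-2024-0003", "package": "libbaz", "reachable": True,  "exploited_in_wild": False},
]

def priority(finding):
    # Reachable and actively exploited goes to the top of the queue;
    # unreachable findings are deprioritized instead of paged to engineers.
    if finding["reachable"] and finding["exploited_in_wild"]:
        return 0
    if finding["reachable"]:
        return 1
    return 2  # not reachable: candidate for suppression

for f in sorted(findings, key=priority):
    print(f["cve"], "-> priority", priority(f))
```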
🤖 🛡️ A brief guide for dealing with Humanless SOC idiots
Anton Chuvakin argues that a fully automated Security Operations Center (SOC) without humans is unrealistic, given the need for human expertise and adaptability when dealing with complex security threats. Many people believe in the "humanless SOC" out of ignorance about how security operations work and about the limitations of current AI technology. I believe we will have SOCs with fewer humans, where engineers are augmented by AI, reducing the need to scale by adding more people to the teams. Link
🤖 🛡️ The Agentic SIEM:
To follow up on AI agents in the SOC, Jack Naglieri provides an overview of how we can adopt agentic AI in SIEMs. Unlike traditional automation, these agents can adapt, learn, and make informed decisions, enhancing the efficiency of security teams. This partnership between AI and human analysts aims to improve security outcomes, allowing humans to focus on more complex challenges. This focus on augmenting security teams with AI supports Anton's argument for keeping humans in the loop to oversee the agents. Link
🤖 🛡️ Deepseek R1 leaks and Local models
While half of the world is rushing to try DeepSeek R1, Wiz Research found an exposed DeepSeek database that leaked sensitive information, including chat history and API keys. The database was publicly accessible without any authentication, posing serious security risks. This incident highlights the need for better security practices as AI technologies continue to evolve rapidly. Going fast has its consequences: the exposed ClickHouse databases were reachable on port 9000. Link
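For the curious, this is roughly what "publicly accessible without authentication" means in practice. A minimal sketch, with a placeholder host, only for infrastructure you own: ClickHouse answers plain HTTP queries on port 8123, while port 9000 carries its native TCP protocol:

```python
import urllib.request

host = "example.internal"  # placeholder: a host you own
url = f"http://{host}:8123/?query=SHOW%20DATABASES"

try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        # An unauthenticated 200 listing database names means the instance
        # is wide open, the kind of misconfiguration Wiz reported.
        print(resp.read().decode())
except Exception as exc:
    print("no unauthenticated access:", exc)
```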

🤖 Everyone is talking about trying DeepSeek R1 on their local machines, but the reality is that the models being tested are distilled versions of other LLMs trained on DeepSeek R1 outputs.
The full DeepSeek R1 model, with 671 billion parameters, requires substantial hardware for local deployment: at least 200-500 GB of combined RAM and VRAM. Most discussions of "local R1" actually involve smaller distilled versions, which lack the full model's capabilities but are more practical for typical users. A distilled model is a smaller, more efficient version of a larger AI model that retains much of the original's knowledge and capabilities while being more suitable for deployment on less powerful hardware. It is created through a process called knowledge distillation, where the smaller model is trained to mimic the behavior of the larger, more complex model.
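For illustration, here is a minimal knowledge-distillation sketch in PyTorch, with toy logits standing in for real models (this is not DeepSeek's actual training pipeline): the student is trained to match the teacher's softened output distribution.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then minimize the
    # KL divergence so the student mimics the teacher's behavior.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy example: random logits over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student
print(loss.item())
```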
🛡️ Slow death of OCSP
Let's Encrypt will stop supporting OCSP for certificate revocation checking starting in May 2025, as it has proven ineffective and costly. Instead, the focus will shift to short-lived certificates, which provide better security without the need for traditional revocation methods. This change reflects a broader movement in the industry to simplify certificate validation and improve security practices.
"With OCSP virtually over, we’re back to CRLs for revocation checking, but the approach has changed. Instead of user agents consuming the CRLs directly, major browser vendors (and, presumably, operating systems) maintain their proprietary revocation checking built on continuous processing of all known CRLs." Link
Agentic AI:
🤖 Goose: an open-source, on-machine AI agent that supercharges your software development by automating coding and engineering tasks. Goose is brought to you by Block, Jack Dorsey's company, and uses Anthropic's Model Context Protocol (MCP) to access services on your machine. It's great to see how different companies are approaching agents for development tasks. Link Blog article
🤖 IntellAgent: Uncover Your Agent's Blind Spots
IntellAgent is a new open-source system designed to evaluate the performance of conversational AI agents through complex testing scenarios. It creates thousands of unique situations to ensure these AI assistants can handle real-world tasks accurately and reliably. This approach helps improve AI systems across various industries by providing detailed performance analysis and identifying areas for enhancement. Basically, it uses an agent to find edge cases in another agent's implementation. Link
🤖 OpenHands/Open.dev (ex-OpenDevin): open-source agents for developers
A full agentic development framework: use AI to tackle the toil in your backlog, so you can focus on what matters: hard problems, creative challenges, and over-engineering your dotfiles. You can try it by just downloading a Docker image and adding an LLM service API key. Link

Learning:
For a limited time, NVIDIA is offering free access to some of its paid AI trainings. Claim yours before the promotion ends. One is particularly interesting for security practitioners:
Exploring Adversarial Machine Learning: In this course, designed to be accessible to both data scientists and security practitioners, you'll explore the security risks and vulnerabilities that adopting machine learning might expose you to, as well as the latest techniques and tools used by attackers, and you'll build some of your own attacks.
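As a taste of that material, here is a minimal sketch of the classic FGSM (Fast Gradient Sign Method) attack in PyTorch, with a toy model and a random "image" standing in for real data (not course code):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # fake "image"
y = torch.tensor([3])                             # its true label

# Compute the loss gradient with respect to the *input*, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# Nudge every pixel in the direction that increases the loss.
epsilon = 0.1  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print("max pixel change:", (x_adv - x).abs().max().item())
```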
Large Language Model Agents MOOC
A very complete training on LLM agents, with instructors from Berkeley and DeepMind and guest speakers from OpenAI, DeepMind, Google, Meta, NVIDIA, Anthropic, and more. The course has finished, but all the material is available, and sign-ups for a new edition will open in the spring. Link
Hope you enjoyed it and learnt something new.
Thank you for reading
Chris