Decentralized Finance (DeFi) has promised to deliver a novel infrastructure that allows for the creation of financial services that do not rely on centralized, tightly controlled institutions. DeFi today is a dynamic (if not chaotic) environment in which new infrastructure components and protocols are routinely introduced to provide new services.
However, it is not yet clear whether the advantages introduced by DeFi applications outweigh the risks of participating in this largely unregulated market.
Multi-million-dollar heists, widespread fraud, unchecked speculation, and devastating social engineering attacks have demonstrated that DeFi has a dark side. In this talk, we will provide an overview of how DeFi protocols are abused and misused to support cybercrime, and of what can (and cannot) be done to combat this massive problem.
In addition, we will provide specific examples of current research in identifying vulnerabilities in smart contracts and DeFi protocols.
Giovanni Vigna is a Professor in the Department of Computer Science at the University of California, Santa Barbara, and the Senior Director of Threat Intelligence at VMware.
His research interests include DeFi security, malware analysis, vulnerability assessment, the underground economy, firmware security, web security, and the applications of AI to security problems. Giovanni Vigna is also the founder of the Shellphish hacking group, which has participated in more DEF CON CTF competitions than any other group in history. He is an IEEE Fellow and an ACM Fellow.
Transition To Practice, They Say: How Two Decades of Security Research Ultimately Spawned a Silicon Valley Startup
In academia, it can require perseverance and patience to see your research gain real-world traction. In this presentation we will recap the journey that turned the open source network security monitor Zeek (formerly Bro) from a little-known research platform into a powerful operational security framework that is now helping protect some of the largest, and most critical, networks in the world. Over a period of more than two decades, Zeek went through a series of quite distinct phases (and a couple of near-death experiences) that, in hindsight, all proved critical to realizing the full potential of the original technology. Today, the Zeek project is thriving more than ever: an active open source community continues to extend the system’s capabilities, while a venture-backed startup founded by Zeek’s creators provides turn-key products to large enterprises and government organizations.
Robin Sommer is a Co-Founder of Corelight, a San Francisco-based security startup providing open NDR solutions based on Zeek. He has worn various hats at Corelight over time: CTO, Head of Engineering, and Open Source Lead. For many years, Robin also led the development team behind Zeek. Before starting Corelight, Robin was a Senior Researcher at the International Computer Science Institute (ICSI) in Berkeley, California, where he led a range of research projects on network security and privacy. He holds a doctorate from the Technical University of Munich, Germany, and now lives in Munich again.
Picture: Fuhrmann/TU Braunschweig
When Papers Choose Their Reviewers: Adversarial Machine Learning in Peer Review
The number of papers submitted to scientific conferences is steadily rising in many disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems employ statistical topic models to characterize the papers’ content and automate their assignment to reviewers. In this talk, we investigate the security of this automation and introduce a new attack that modifies a given paper so that it selects its own reviewers. Our attack is based on a novel optimization strategy that fools the topic model with unobtrusive changes to the paper’s content. In an empirical evaluation with a (simulated) conference, our attack successfully selects and removes reviewers, while the tampered papers remain plausible and often indistinguishable from innocuous submissions.
Konrad Rieck is a professor at TU Berlin, where he leads the Chair of Machine Learning and Security as part of the Berlin Institute for the Foundations of Learning and Data. Previously, he held academic positions at TU Braunschweig, the University of Göttingen, and the Fraunhofer Institute FIRST. His research focuses on the intersection of computer security and machine learning. He has published over 100 papers in this area and serves on the program committees of the top security conferences. He has been awarded the CAST/GI Dissertation Award, a Google Faculty Award, and an ERC Consolidator Grant.