BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Northwest Quantum - ECPv6.15.17.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Northwest Quantum
X-ORIGINAL-URL:https://nwquantum.uw.edu
X-WR-CALDESC:Events for Northwest Quantum
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20260101T121000
DTEND;TZID=UTC:20260601T130000
DTSTAMP:20260425T173845Z
CREATED:20251223T050848Z
LAST-MODIFIED:20251223T050848Z
UID:1574-1767269400-1780318800@nwquantum.uw.edu
SUMMARY:Spring 2026 Colloquium Schedule
DESCRIPTION:How to join\nColloquia are held on Thursdays in Webster Physical Sciences Building room 11 on the Pullman campus from 12:10 to 1:00pm. \nColloquia are held in person. For those who cannot attend in person\, please join us on Zoom. \n\nMeeting ID: 965 8240 9398\nPasscode: physastro\n\nPast colloquia can be viewed in our YouTube library. \nYou can support these events by giving to the Physics Excellence Fund or the S. Towne Stephenson Lectureship.
URL:https://nwquantum.uw.edu/event/spring-2026-colloquium-schedule/
LOCATION:Pullman\, Webster Physical Sciences Building room 11\, Pullman\, WA\, United States
CATEGORIES:Partner Events
ATTACH;FMTTYPE=image/svg+xml:https://nwquantum.uw.edu/wp-content/uploads/2025/09/WSU-lockup-horz-4c.svg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260104
DTEND;VALUE=DATE:20260108
DTSTAMP:20260425T173845Z
CREATED:20251223T053844Z
LAST-MODIFIED:20251223T053844Z
UID:1607-1767484800-1767830399@nwquantum.uw.edu
SUMMARY:Joint Mathematics Meetings 2026
DESCRIPTION:Join Pacific Northwest National Laboratory at the 2026 Joint Mathematics Meetings (JMM)! JMM is the world’s largest gathering of mathematics experts and professionals. The American Mathematical Society\, in collaboration with many partnering organizations\, will host this exciting annual event in Washington\, D.C. \nPNNL Organized Special Sessions\nAMS Special Session on Mathematics for AI Robustness\, Explainability\, and Safety\, I\nDate: January 4\, 2026 \nOrganizer: Scott Mahan  \nCo-Organizers: Eric Yeats\, Henry Kvinge\, and Tim Doster \nAnswering questions around the safety\, robustness\, and explainability of AI models is becoming increasingly critical. Mathematics helps us understand AI failure modes and make AI more transparent and reliable. This special session features mathematics research that analyzes and addresses AI assurance concerns\, showcasing areas such as algebraic geometry\, probability theory\, and computational topology\, which provide the insights required for AI systems to meet the needs of real-world applications. \nAMS Special Session on Augmenting\, not Automating: Machine Learning Tools for Mathematical Discovery\, I and II\nDate: January 5\, 2026  \nOrganizer: Helen Jenne \nCo-Organizers: Henry Kvinge and Max Vargas  \nThe last several years have seen an explosion of interest in the application of machine learning for mathematical discovery. This special session will consist of a morning session highlighting recent developments in this area\, followed by an interactive afternoon problem session. The afternoon session will begin with tool demonstrations\, leading into small group exploration of datasets that the organizers will provide using tools discussed earlier in the session. \nAMS Special Session on Mathematical Advances in Mission-Aligned Research\, I and II\nDate: January 7\, 2026 \nOrganizer: Emilie Purvine \nCo-Organizers: Brett Jefferson and Audun Myers \nThe U.S. 
Government has a wide range of mission priority areas\, including energy\, security\, health\, AI\, cybersecurity\, and more. Moreover\, these priorities come with unique challenges\, including scale\, temporal data\, and the need for transparent solutions. Mathematics\, both theoretical and applied\, plays a role in all these areas. This session will highlight advances in mathematics that have enabled progress in these areas to demonstrate the wide-ranging uses of mathematics. \nPNNL Presentations \nRetraining Emulation: A General Framework for Machine Unlearning\nDate: January 4\, 2026 \nPresenter: Yiran Jia \nAuthors: Eric Yeats and Scott Mahan \nAs large-scale models are increasingly deployed in sensitive domains\, there is a critical need for machine unlearning: the task of selectively removing the influence of a specific data subset from a trained model without the prohibitive cost of full retraining. The central challenge is to ensure complete removal of target information while preserving the model’s overall utility on remaining data. \nDetecting Collateral Damage in Unlearning for Diffusion-Based Image Generation Models\nDate: January 4\, 2026 \nPresenter: Aaron Jacobson \nAuthor: Scott Mahan \nRecent generative AI models have demonstrated remarkable growth in capabilities\, size\, and data requirements. As this technology continues to develop\, privacy and security risks associated with training these models on sensitive data become more common and harder to prevent. Retraining large generative models from scratch without sensitive data is cost-prohibitive; the field of machine unlearning seeks to provide cost- and time-efficient update methods to remove the effects of sensitive data without causing collateral damage to related\, non-sensitive data. To understand the effects of these methods\, we investigate the internal representations of data in large neural networks and diffusion-based generative models. 
We observe that the unlearning process induces changes in latent-space representations of data; importantly\, the local intrinsic dimension of data manifolds is increased when the corresponding classes of data are unlearned. Using this insight\, we propose a method to identify data that may be subject to collateral damage from unlearning and measure the degree to which they were affected. In the context of diffusion-based image generation\, this method works by sampling non-target images from the normal space of the target data manifold and projecting them to the space of natural images. The result is a collection of natural images that are close to the target data in latent space\, and these results can be captioned to produce a list of classes that may be collaterally damaged by unlearning. \nInvestigating Bijection Discovery with LLMs: A Case Study Using Catalan Objects\nDate: January 5\, 2026 \nPresenting Author: Helen Jenne \nRecent breakthroughs have highlighted the potential of large language models (LLMs) to advance mathematics by combining program synthesis with evolutionary search. Systems such as FunSearch seem to be particularly effective for combinatorial optimization problems\, such as the cap set problem\, where it is straightforward to verify a proposed solution. \nWe ask whether similar approaches can address the more creative challenge of bijection discovery. Finding bijections requires mathematical intuition and deep familiarity with the combinatorial objects of interest\, suggesting an opportunity for LLM-based systems that combine broad prior knowledge\, code-writing proficiency\, and the ability to do computational exploration at a scale far beyond what is possible for humans. We investigate this question using objects counted by the Catalan numbers\, with a pipeline based on OpenEvolve (an open-source analog of AlphaEvolve). In this talk\, we present our framework and progress\, and share key lessons learned. 
We will also give a brief overview of the AI-for-combinatorics efforts at Pacific Northwest National Laboratory\, highlighting the Algebraic Combinatorics Dataset Repository—a collection of datasets representing foundational results and open problems in algebraic combinatorics. This talk represents joint work with Davis Brown\, Herman Chau\, Jesse He\, Max Vargas\, Sara Billey\, Mark Raugas\, and Henry Kvinge. \nInformation Theory in a Variety of Contexts\nDate: January 5\, 2026 \nPresenting Author: William Kay \nI am a research mathematician who went from academia to a Federally Funded Research and Development Center. In this talk\, I will discuss how my background in information theory found a variety of applications across domain sciences. No background in information theory or applied science is necessary for the audience. \nModel Editing and Machine Unlearning for Mission Priorities\nDate: January 7\, 2026 \nPresenting Author: Scott Mahan \nAuthors: Henry Kvinge\, Tim Doster\, Eric Yeats\, Darryl Hannan\, Yiran Jia\, Aaron Jacobson\, and Wilson Fearn \nGenerative AI models are advancing rapidly and showcasing increasingly sophisticated capabilities\, making them useful in many U.S. government mission priorities. However\, the massive size of training datasets increases the likelihood of exposure to undesirable or flawed data\, potentially resulting in unwanted downstream model behaviors. Model editing and machine unlearning offer effective mechanisms for AI alignment\, enabling the modification of factual associations or the removal of problematic information from generative models. In this work\, we present mathematical innovations that enhance the effectiveness of model alignment techniques. Moreover\, we introduce methods to predict unexpected failures in aligned generative AI systems and propose strategies for mitigating these risks. 
To demonstrate the real-world impact of these advances\, we explore applications in areas critical to mission priorities\, such as cybersecurity\, where enhanced AI capabilities can strengthen defense mechanisms\, detect threats\, and safeguard sensitive data. \nEvaluation of AI Systems Beyond Accuracy and Leaderboards\nDate: January 7\, 2026 \nPresenting Author: Helen Jenne \nAuthors: Robert Jasper\, Henry Kvinge\, Sarah McGuire\, Grace O’Brien\, and Andrew Aguilar \nRecent years have seen dramatic advances in the capabilities of AI systems\, but our methods for ensuring these systems work correctly haven’t kept pace. Even before the explosion in use of generative AI\, we had limited understanding of failure modes\, performance nuances\, and unexpected behaviors. With generative models\, we face an even more fundamental challenge: it is difficult to specify what it means for a model to be correct. In this talk\, we will give an overview of current evaluation challenges and approaches and present frameworks we’ve developed to address these complex challenges. This represents joint work with Andrew Aguilar\, Robert Jasper\, Henry Kvinge\, Grace O’Brien\, and Sarah McGuire Scullen. \nA Topological View of Cyber Networks\nDate: January 7\, 2026 \nPresenting Author: Emilie Purvine \nCyber networks are incredibly complex systems. To study their operation and understand their current state\, we must be able to analyze the many data streams that are captured by logging services. These include (but are definitely not limited to!) network flow\, host\, process\, and authentication logs. There has been significant work in analyzing these data using graph models\, machine learning\, and natural language processing (NLP)-inspired methods. While these approaches have shown significant value\, they also have some drawbacks. 
Graph models may miss some of the complex interactions present between network entities like hosts\, users\, processes\, and protocols that show up in the log metadata. Machine learning can infer and extrapolate from very complex patterns\, but its reasoning can be difficult to communicate to an analyst user. And while NLP methods are very good at analyzing sequences of tokens\, such as those in cyber logs\, they may not take advantage of the structure in the logs themselves. To address some of these drawbacks\, I will provide examples where hypergraphs and a topological perspective have been able to derive valuable insight and situational awareness for cyber networks. Hypergraphs can capture the kinds of multi-way relationships among behaviors within cyber networks\, and topology can capture high-order structural properties that graph methods cannot. As a consequence\, the results from hypergraph and topological analytics can be more interpretable for analysts. \nComparison of Binaries Using Sequence Alignment\nDate: January 7\, 2026 \nPresenting Author: Brett Jefferson \nAuthor: Stephen Young \nComparing binary files of software (or firmware) has inherent challenges: register allocations can change\, and the reordering of code components by optimizers and different compilers can alter the binaries. This talk will present one method of comparison that borrows techniques from biology to measure the change between versioned binaries. \n\nCareers at PNNL \nAs a national laboratory that conducts an abundance of research using advanced mathematics\, we are always searching for talented individuals looking to be a part of our mission.
URL:https://nwquantum.uw.edu/event/joint-mathematics-meetings-2026/
LOCATION:Washington\, Washington\, DC\, United States
CATEGORIES:Conferences
ATTACH;FMTTYPE=image/jpeg:https://nwquantum.uw.edu/wp-content/uploads/2025/12/NSD_2700_EVENT_JMM2026_WebHero.jpg
END:VEVENT
END:VCALENDAR