Summary
Who am I, and Why am I Here? (apologies to the loved ones of the late Adm. James Stockdale)
I am a 77-year-old grandmother. Since 1972 I have been a forecaster of geopolitical events. I help the public, governmental bodies, corporations, and intelligence entities plan for the future. Like any grandmother, I care about my descendants' futures. I believe that unaligned AGI is the greatest existential threat facing us.
That's why I appreciate your AI Safety Fundamentals course. Thank you, all!
Hybrid Human/Many-Federated-LLM Systems Could Play a Role in AI Alignment
My experience (albeit small for now) related to this class of systems:
I'm pretty good at forecasting. I was rated the third-best forecaster of 2023 at INFER, a hybrid human/AI system in support of the US intelligence community. I have successfully forecast many INFER questions relevant to AI Alignment.
In 2020 I was rated among the top ten forecasters of the Covid-19 epidemic in an IARPA FOCUS experiment, which included conditional and counterfactual questions.
In 2019, my colleague Christopher Karvetski and I prototyped and tested over forty forecasting systems, many of them hybrid, in IARPA's Geopolitical Forecasting Competition 2. Our results are published here. Our key new finding is that, on top of any or all other techniques for optimizing crowdsourced forecasts, the semantic structures of rationales written by humans can give a large boost to accuracy.
Results: The two most important factors are integrative complexity and the use of historical data, either directly relevant (time series) or via historical comparisons.
[Figure 3, calculated via a random forest algorithm; from "What do forecasting rationales reveal about thinking patterns of top geopolitical forecasters?"]
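To make the method concrete, here is a minimal sketch, assuming scikit-learn, of how a random forest ranks rationale features by importance. This is not our published code; the feature names, data, and relationship between features and accuracy are entirely hypothetical placeholders.

```python
# A minimal, illustrative sketch: rank hypothetical rationale features by
# random forest importance, the kind of ranking Figure 3 reports.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500  # hypothetical number of scored rationales

# Hypothetical features extracted from each written rationale.
X = pd.DataFrame({
    "integrative_complexity": rng.normal(3.0, 1.0, n),
    "uses_time_series": rng.integers(0, 2, n),
    "uses_historical_comparisons": rng.integers(0, 2, n),
    "word_count": rng.integers(20, 400, n),
})

# Hypothetical target: each rationale's accuracy score (lower is better).
# We fabricate a relationship here purely for illustration.
y = (
    0.5
    - 0.05 * X["integrative_complexity"]
    - 0.04 * X["uses_time_series"]
    - 0.03 * X["uses_historical_comparisons"]
    + rng.normal(0, 0.05, n)
)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, y)

# Mean-decrease-in-impurity importances, sorted from most to least important.
for name, imp in sorted(zip(X.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:30s} {imp:.3f}")
```

Running this prints an importance ranking over the hypothetical features; our published analysis applies the same kind of ranking to features coded from real forecaster rationales.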
My forecasting successes rely upon my skills as a synthesist of the disciplines of computer security, forecasting, and industrial quality control. I am both a practitioner and a researcher in these fields, with a hands-on history of designing, coding, prototyping, building, and installing software and hardware. My work history and research publications: some top examples here, a longer list here, and the too-boring, nearly complete list here.
Here's why I believe that our project, whose work to date is documented in the following pages, points the way to combating all levels of AI threats, including apparently inevitable and existentially threatening AGI systems.
The history of warfare shows that it is better to anticipate the enemy's moves than to be caught unprepared. Hence today's intelligence agencies. In recent years both DARPA and IARPA, which serve the US intelligence community among other sponsors, have been researching topics of central importance to the AGI threat. The prime advocate of this research is Jason Matheny, now CEO of RAND and previously the head of IARPA, where he had a voice at President Obama's National Security Council. At IARPA he spearheaded research projects on crowdsourced forecasting, including hybrid systems. At RAND, he now sponsors INFER, which runs a hybrid human/ANTHROP/C forecasting system. See its currently active questions related to AI Alignment here; ANTHROP/C inputs appear under each question's Crowd Forecast tab.
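For readers unfamiliar with how such hybrid systems pool judgments, here is a minimal sketch of one simple way human crowd forecasts and an LLM forecast might be combined on a binary question. This is not INFER's actual method; all numbers and weights are hypothetical.

```python
# Illustrative pooling of hypothetical human forecasts with one LLM forecast.
from statistics import median

human_probs = [0.35, 0.42, 0.30, 0.55, 0.40]  # hypothetical crowd forecasts
llm_prob = 0.48                               # hypothetical LLM forecast

crowd = median(human_probs)
# The 0.7/0.3 weighting is purely illustrative, not INFER's.
hybrid = 0.7 * crowd + 0.3 * llm_prob
print(f"crowd median = {crowd:.2f}, hybrid forecast = {hybrid:.2f}")
```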
Granted, I'm appealing in part to authority. Our experiments so far have been published, subjected to peer review, and not shown to be wrong, so we may be on the right track. On the other hand, we are aware that even peer-reviewed results in the world's top journals can turn out to be non-replicable. Indeed, I have participated in replicability experiments since the first one, by Camerer and Dreber in 2016, continuing through all the RepliCATS experiments, most recently its ongoing SMART preprints experiment. So I am fully aware that it is crucial to remain open to falsifying even our seemingly best results and to pursuing new lines of research.
That's why I don't believe that the research I'm proposing will, on its own, be enough to thwart the existential risks of AGI. More approaches are needed! All hands on board! For example, you folks with AI Safety Fundamentals are playing a crucial AI Alignment role by seeding the field with your graduates. Again, thank you all!
And surely there's more to bring against the AGI existential threat, including efforts not yet begun, perhaps not even in the idea stage. Indeed, to encourage more inputs to solving the AGI threat, my team's BestWorld may well be important. We support any technologies that make it easier to discover what is true, while ensuring these truths will be believable and beneficial. Free, open, and believable discussions about how to counter AGI dangers will be essential. The human race needs help sorting fact from fiction so as to resist the forces that could lead to the extinction of our species and, indeed, many other species; this era is already being called the Anthropocene Mass Extinction. We hope that our approach will contribute to this process of discussion leading to the implementation of countermeasures.
Please continue to our accomplishments so far with this AI alignment project --->
© 2024 Carolyn Meinel. All rights reserved.