AI Administrators
The first thing we do, let’s automate away all the deans.
Readers, we need to talk about AI for science. AI scientists and self-driving labs are certainly hot areas in 2025, but they're missing a critical point. The core mechanical activities of science, like reading the literature, doing lab and field work, and writing up results, have all repeatedly been improved by the arrival of better tools. Using AI as one more tool for this is so incremental it hurts. Meanwhile, most of the overhead work that scientists do boils down to producing words for consumption by administrators. Those administrators then turn the scientists' words into other words for consumption by other administrators, in a bureaucratic human chain that, for federally funded work, literally connects all the way to the President of the United States. LLMs are word machines. The first thing we do, let's automate away all the deans.
The more we speed ourselves up with new tools, the higher the opportunity cost of each piece of overhead work we need to perform. If we can substantially accelerate our work with AI, every non-technical word in a grant application, every expense report, and every committee meeting will destroy far more value than it did before. A key realization here is that much of this overhead work exists only because the administration layer is human. Take meetings, for example. Meetings, especially meetings in academia, are as much social events and status displays as they are occasions for decision making and communication, and often more so. Shrink the number of administrators, and you can shrink the number of meetings. This administration layer also reifies legacy processes into an organizational structure that locks us into old ways of doing things. Change tends to ruffle feathers; fewer peacocks means fewer feathers.
The last major advance we had for our overhead work was… Google Docs? That's depressing. We do not want to be living examples of Baumol's cost disease. We should be experimenting all over the place with how to do science faster and with nearly no overhead. A scientist who manages AI agents has an overall productivity equal to (the total productivity of all the agents they manage) × (1 − the fraction of their time spent on overhead work). We're thinking a lot about the first factor and ignoring the second. There's a future out there where the overhead work is done by computers and most of the university, corporate R&D center, or national lab isn't necessary anymore. We would like to live in that future as soon as possible.
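To make the arithmetic concrete, here is a toy calculation in Python; the 10x agent output and the overhead fractions are invented numbers for illustration, not estimates.

```python
# Toy illustration of the productivity expression above (all numbers invented).
def effective_productivity(agent_output: float, overhead_fraction: float) -> float:
    """Overall productivity = (total agent productivity) * (1 - overhead fraction)."""
    return agent_output * (1.0 - overhead_fraction)

# Agents that together do 10x a solo scientist's work, with 40% of the
# scientist's time lost to overhead, net out to 6x.
print(effective_productivity(10.0, 0.4))  # 6.0

# Halve the overhead and the same agents yield 8x: the second factor
# scales everything the first one produces.
print(effective_productivity(10.0, 0.2))  # 8.0
```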
If you want to start now, here are some broad categories of science administration tasks that could be streamlined, or perhaps even fully performed, by large language models (LLMs) in the near term, along with initial actions you might take to start automating them.
Translating scientific information for specialized non-technical audiences. Many science administrators are coordinators, working inside an institution with other departments like finance or human resources, or externally with sponsors and other stakeholders. Their audiences each need different information but are not native speakers of science. These administrators thus spend large amounts of time creating specialized translations from the language of science into the languages of fields like corporate finance, project management, or government relations. LLMs can reliably produce these translations at scale. Talk to the relevant departments, ask for examples of what they get from the go-betweens, and then have an LLM generate those documents for them directly.
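As one way to start, here is a minimal sketch of that last step, assuming the OpenAI Python SDK (any chat-capable LLM would work); the audience descriptions, model name, and example update are placeholders to be replaced with the real examples you collect from those departments.

```python
# Minimal sketch: translate a technical project update for a non-technical
# office. The audience "style guides" below are invented placeholders; swap in
# real examples gathered from the departments you actually work with.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

AUDIENCES = {
    "finance": "Focus on spend to date, burn rate, and upcoming large purchases. No jargon.",
    "government_relations": "Focus on milestones, public benefit, and sponsor-visible outcomes.",
}

def translate_update(technical_update: str, audience: str) -> str:
    """Rewrite a technical progress update for a specific administrative audience."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Rewrite the following research update for the {audience} office. "
                        f"{AUDIENCES[audience]} Keep it under 200 words."},
            {"role": "user", "content": technical_update},
        ],
    )
    return response.choices[0].message.content

print(translate_update("We completed the cryostat rebuild and ...", "finance"))
```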
Tracking, auditing, and compliance. Complying with layers of policy often requires that the money scientists spend and the work they do be meticulously tracked and reported. Scientists need the results of this tracking so that they do not overspend or perform unauthorized work. These requirements also change frequently, forcing ongoing updates to the underlying forms and software tools. LLMs or browser agents can take on much of this work: creating easy-to-use input forms (or removing the need for them altogether), translating that data into multiple formats, generating reports, and distributing information. Have an LLM summarize your funding contract so that you know what you can and can't spend on. Try having a browser agent do your expense reports.
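Here is a hedged sketch of the contract-summary suggestion, again assuming the OpenAI Python SDK; the prompt, model name, and file name are placeholders, and the extracted rules still need to be checked against the actual award terms (your grants office remains the authority).

```python
# Sketch: pull spending rules out of an award document into a simple checklist.
# A human must still verify the output against the real contract language.
import json
from openai import OpenAI

client = OpenAI()

def extract_spending_rules(contract_text: str) -> list[str]:
    """Extract allowable/unallowable cost rules from an award document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[
            {"role": "system",
             "content": "List every rule about what this award may or may not be spent on. "
                        'Return JSON like {"rules": ["..."]}.'},
            {"role": "user", "content": contract_text},
        ],
    )
    return json.loads(response.choices[0].message.content)["rules"]

rules = extract_spending_rules(open("award_terms.txt").read())  # placeholder file
for rule in rules:
    print("-", rule)
```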
Prioritizing projects and individuals for funding. Science administrators often prioritize which projects get funded. Some of this is a matter of expert taste, which should not be automated. However, LLMs should be able to make proposals easier to understand and raise flags about bad assumptions, gaps in plans, and work that has already been done by others. In our day jobs evaluating proposals from scientists to our respective fellowship programs, we use taste heavily but inform that taste with advice from LLMs. When competing for internal funding at your institution, attach an LLM-created summary that you approve to the front of your proposal. Over time, see how much of the full proposal was actually necessary.
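A minimal sketch of the flag-raising side, with the same caveats as above: the prompt, model name, and file name are placeholders, and the output is advice to inform taste, not a score to replace it.

```python
# Sketch: ask an LLM to flag weak points in a proposal before expert review.
# The flag categories mirror the ones named in the paragraph above.
from openai import OpenAI

client = OpenAI()

FLAG_PROMPT = (
    "Read the proposal below and list concerns in three categories: "
    "(1) assumptions that may not hold, (2) gaps in the work plan, "
    "(3) closely related work that may already exist. "
    "Be specific and quote the relevant passage for each concern."
)

def flag_proposal(proposal_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": FLAG_PROMPT},
            {"role": "user", "content": proposal_text},
        ],
    )
    return response.choices[0].message.content

print(flag_proposal(open("my_proposal.md").read()))  # placeholder file
```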
Large proposal writing. Science administrators are usually the ones tasked with writing or revising large portions of multi-institution proposals, despite not necessarily having expertise in the underlying field. LLMs can augment their knowledge and improve the clarity of their proposals, while also reducing the number of requests they make to scientists for technical clarifications or rewrites. This one is a little further in the future, but what if you wrote your next multi-PI proposal in something like GitHub, with the equivalent of a coding agent proposing and merging changes, especially for required boilerplate? It might be that you wouldn't need administrative support at all.
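Purely as a thought experiment, here is what the boilerplate-drafting part of that agent might look like; the repository layout, required sections, and model name are all invented, and the drafts are written as files for the PIs to review rather than merged automatically.

```python
# Speculative sketch of a "proposal repo with a drafting agent": scan a shared
# proposal repository for required boilerplate sections and draft any that are
# missing, the way a coding agent proposes a change rather than silently merging it.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
REPO = Path("proposal")  # placeholder path to the shared proposal repo

REQUIRED_SECTIONS = {
    "data_management_plan.md": "a data management and sharing plan",
    "facilities.md": "a facilities, equipment, and other resources statement",
}

for filename, description in REQUIRED_SECTIONS.items():
    target = REPO / filename
    if target.exists():
        continue
    project_summary = (REPO / "summary.md").read_text()  # placeholder source file
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system",
             "content": f"Draft {description} for the project described below. "
                        "Mark anything you are unsure about with TODO."},
            {"role": "user", "content": project_summary},
        ],
    )
    # Write the draft for human review; committing and merging stay with the PIs.
    target.write_text(response.choices[0].message.content)
    print(f"Drafted {filename} -- review before committing.")
```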
Tell us about your experiments automating your overhead work!


