Comprehensive AI Services team: the story

In this document we look at the inception of the project, how it evolved, and what came of it. Its salient features were a large team (5–6 people) and an open-ended, wide-scoped topic.

Our rough plan at the start was to

  • look at examples, in order to get surface area on the problem,
  • build a model based on that surface area,
  • compile a problem list based on that surface area,
  • solve one technical problem using the model (e.g., “how agency could arise in CAIS”).

While working on the project, it became apparent that the study of AI service systems is in a very pre-paradigmatic state, and that we need to build up more “methodological” understanding of the topic to make progress. For example, it is unclear what is meant by a “model” for CAIS: Do we mean a mental tool from which one could bootstrap one’s intuitions? Or a computational model which could be used to make predictions about a particular aspect of reality? Or a fundamental model in which the whole system could be faithfully described, but that would necessarily be too unwieldy to use in practice?

As a result, we ended up reading a series of blogposts on the nature of modelling in mathematics and computer science and developed ideas on how modelling relates to coming up with examples, creating agendas and problem lists, and finally solving technical problems. In essence, it seems that all these tasks “feed into each other”: one is unlikely to come up with a good model for studying a technical problem unless one already knows which problem one wants to address, and having a better model allows one to better understand which technical problems need solving.

With such a large topic, we found many connections with other fields of study – e.g. Operating Systems, Complex Networks, Multi-Agent Systems, Service Ecosystem Modelling – and with adjacent ideas in the space of AI Alignment. We expect to eventually produce a more or less complete description of how these fields could in principle inform the study of AI service ecosystems.

We gained vast amounts of surface area at different points in the landscape and found two high-level ways of classifying our explorations: 

Modelling-oriented research included descriptive foundational thinking about axiomatic properties of services and their connections, the network properties of such systems, and exploring Process Algebras as a modelling paradigm for intra- and inter-service dynamics; and more prescriptive design thinking about the modelling of phenomena and components like access control, or the human–system interface.

Problem-oriented research bore fruit at different levels of depth. Examples include numerous mappings of existing AI safety problems into a service-ecosystem setting; considerations about emergent agency; human and institutional value drift; and the transition between our current world and a world of pervasive, heavily automated intelligent services.

The main benefit of this project, to us, has been the gain of a large amount of “surface area”, which should make us more effective when investigating related questions in the future. Apart from that, we have identified several (~5) promising topics that deserve further attention, but that would each merit an individual project — we aim to publish their list later. Finally, we also came up with several (~10) more standalone insights that deserve being written up separately, and might be suited to a blogpost format. Accounting for opportunity costs and real-life interference, we believe that we will end up fleshing out roughly three of these ideas. Currently, it seems promising to us to write:

  1. A post about ways in which seemingly short-term AI issues could have important interactions with long-term AI issues and existential risks.
  2. An introductory text about systems of AI services that would describe the key concepts, introduce mental tools useful for investigating this topic, give references for relevant fields of study and existing results, and suggest questions for future research.
  3. An operationalization of the concept of “agency”, as opposed to “tools”, in terms of how difficult it is to describe the given object from the point of view of Dennett’s three stances.


Apart from this, we learned several lessons related to working on projects of this type:

  • Ownership: It is very useful to have someone “own” a task, even when many people are participating; this requires little extra work from the owner.
  • Exploratory and brainstorming sessions work fine with a larger number of people. However, when trying to attack a specific problem, having more than 3 people in a conversation noticeably decreased our effectiveness.
  • With 5–6 members in the team, we spent a lot of energy on keeping each other updated on what was happening and on communicating our ideas.
  • Make it a point to agree on actionable next steps, and have people commit to them before adjourning a meeting.
  • If stuck, it is fine to trust your intuitions about what is interesting and shift to something else.