You should take a look at Microsoft's Durable Functions, which combines event sourcing, an (optional) actor model, and serverless. It's some pretty neat tech.

I tried doing something similar to this several years ago, and here are a few issues I ran into:

1. Pub/sub in Event Sourcing is a bad idea. It's really hard to get right. (What do you do if, due to scaling or infrastructure issues, the subscription starts after the events are published?) Instead, it's better to push commands deliberately to a process manager that handles the inter-domain communication and orchestration.

2. Concurrency. Ensuring that each aggregate is effectively a single-threaded entity is a must. Having the same aggregate id running in multiple places can cause some really fun bugs. This usually requires a distributed lock of some sort.

3. Error handling. I ended up never sending a command to a domain directly; instead, I sent it to a process manager that could handle all the potential failure cases.
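To make points 1 and 3 concrete, here's a minimal Elixir sketch of the approach: commands go to a process manager, which dispatches them to the owning domain and deals with failure explicitly rather than publishing events and hoping a subscriber is listening. All module and function names here (`OrderProcessManager`, `Billing.execute/1`, etc.) are illustrative, not from any particular library:

```elixir
defmodule OrderProcessManager do
  use GenServer

  def start_link(order_id),
    do: GenServer.start_link(__MODULE__, order_id, name: via(order_id))

  defp via(order_id), do: {:via, Registry, {ProcessManagers, order_id}}

  # Commands are pushed deliberately to the process manager,
  # never published on a pub/sub topic.
  def dispatch(order_id, command),
    do: GenServer.call(via(order_id), {:dispatch, command})

  @impl true
  def init(order_id), do: {:ok, %{order_id: order_id, pending: []}}

  @impl true
  def handle_call({:dispatch, command}, _from, state) do
    # `Billing.execute/1` is a stand-in for the downstream domain.
    case Billing.execute(command) do
      {:ok, events} ->
        {:reply, {:ok, events}, state}

      {:error, reason} ->
        # Compensate, retry, or park the command - the failure case is
        # handled here explicitly instead of being a lost message.
        {:reply, {:error, reason}, %{state | pending: [command | state.pending]}}
    end
  end
end
```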

For anyone interested in event sourcing with the actor model I've built an open source Elixir library called Commanded (https://github.com/commanded/commanded) which takes advantage of Erlang's BEAM VM to host aggregate processes. There's also an event store implemented in Elixir which uses Postgres for storage (https://github.com/commanded/eventstore).

The actor model provides the guarantee that requests to a single instance are processed serially, while requests to different instances can be processed concurrently. Distributed Erlang allows these instances to be scaled out amongst a cluster of nodes with transparent routing of commands to the instance, regardless of which connected node it is running on.
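One way this routing can look in practice (an illustrative sketch, not Commanded's actual internals): registering each aggregate process under its id means there is only ever one live process per id, so commands to one aggregate serialize through its mailbox while different aggregates run concurrently. Using `:global` extends that uniqueness guarantee across a cluster of connected nodes:

```elixir
defmodule AggregateSupervisor do
  # Start (or find) the single process for this aggregate id.
  def start_aggregate(module, aggregate_id) do
    # `:global` registration is cluster-wide: only one process may
    # hold this name across all connected nodes.
    name = {:global, {module, aggregate_id}}

    case GenServer.start_link(module, aggregate_id, name: name) do
      {:ok, pid} ->
        {:ok, pid}

      # Someone else (possibly on another node) started it first -
      # reuse that pid instead of spawning a duplicate.
      {:error, {:already_started, pid}} ->
        {:ok, pid}
    end
  end
end
```

Calls sent to the returned pid are transparently routed to whichever node the process lives on, which is the "distributed lock" from point 2 above falling out of the runtime for free.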

In Elixir and Erlang, the OTP platform provides the building blocks to host an aggregate instance as a process (as a `GenServer`). Following the "functional core, imperative shell" style I model the domain code as pure functions with the host process taking care of any IO, such as reading and appending the aggregate's events.
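A rough sketch of that split (the `EventStore` calls are stand-ins for real IO, with simplified signatures): the domain module is pure functions over state, commands, and events, while the `GenServer` shell loads the event stream, rebuilds state, and appends new events.

```elixir
defmodule BankAccount do
  defstruct balance: 0

  # -- functional core: pure and trivially unit-testable --------------
  def execute(%__MODULE__{balance: b}, {:withdraw, amount}) when amount > b,
    do: {:error, :insufficient_funds}

  def execute(%__MODULE__{}, {:withdraw, amount}),
    do: {:ok, [{:money_withdrawn, amount}]}

  def apply_event(%__MODULE__{balance: b} = account, {:money_withdrawn, amount}),
    do: %{account | balance: b - amount}
end

defmodule BankAccount.Server do
  use GenServer

  # -- imperative shell: all IO lives here ----------------------------
  @impl true
  def init(account_id) do
    # Rebuild current state by folding the stored events (hypothetical
    # event store API).
    events = EventStore.read_stream_forward(account_id)
    state = Enum.reduce(events, %BankAccount{}, &BankAccount.apply_event(&2, &1))
    {:ok, {account_id, state}}
  end

  @impl true
  def handle_call({:execute, command}, _from, {id, state}) do
    case BankAccount.execute(state, command) do
      {:ok, events} ->
        :ok = EventStore.append_to_stream(id, events)
        new_state = Enum.reduce(events, state, &BankAccount.apply_event(&2, &1))
        {:reply, :ok, {id, new_state}}

      {:error, _} = error ->
        {:reply, error, {id, state}}
    end
  end
end
```

Because the core is pure, tests can drive `execute/2` and `apply_event/2` with plain data and never touch a database or a process.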