Activate 1.6 is a quick release that features an integration with the finagle-mysql non-blocking driver and some performance enhancements. Example code:

import net.fwbrasil.activate.ActivateContext
import com.twitter.finagle.exp.MysqlClient
import com.twitter.finagle.exp.MysqlStackClient
import com.twitter.finagle.client.DefaultPool
import com.twitter.util.TimeConversions.longToTimeableNumber
import net.fwbrasil.activate.storage.relational.async.FinagleMySQLStorage

object asyncFinagleMySQLContext extends ActivateContext {
    lazy val storage = new FinagleMySQLStorage {

        val dbPoolConfig = DefaultPool.Param(
            low = 10,
            high = 50,
            bufferSize = 0,
            idleTime = 100 millis,
            maxWaiters = 200)

        val client =
            new MysqlClient(MysqlStackClient.configured(dbPoolConfig))
                .withCredentials("user", "pass")
    }
}

Please report any issues you find with this version.

This version features support for custom ids, better Play Framework integration, and some classes to ease writing tests with Activate.

Special thanks to Valentin Churavy, Paweł Rzeczewski, Anton Khodakivskiy, and Janos Haber for their valuable contributions.

Change log:

- custom ids
- mysql async
- reads without transaction
- new play integration
- test support
- deferred read validation
- query collection operators (in/notIn)
- custom classpath scan
- persisted index
- zip encoder
- bug fixes

Activate 1.4.1 is available.

The main new feature is the PrevalentStorage, which uses memory-mapped files to maintain a prevalent storage and provides a highly scalable persistence solution. Another important new feature is support for Slick 2 lifted embedding queries.

Special thanks to Paweł Rzeczewski for the help with the Lift integration and to Matias Rodriguez for the PrevalentStorage idea.

Release notes (with links to the documentation):

- Prevalent storage
- Slick 2 lifted embedding support
- Sql Server dialect
- Custom type encoders
- Lift Framework integration
- Update to Play 2.2
- Custom cache configuration
- Eager queries
- Cached queries
- Dynamic queries
- Memory index
- Lifecycle listeners
- Property listeners
- EntityMap
- Cascade delete
- Kryo serializer
- Modify column type migration
- Queries with empty where

Activate 1.3 is available at maven central.

This version features reactive persistence: it is possible to perform fully non-blocking async transactions and queries using PostgreSQL and MongoDB.
There is a reactive play example application.

Special thanks to Mauricio for the help with postgresql-async and to b0c1 for the json related work.

Release notes (with links to the documentation):

- Async transactions
- Async queries
- Reactive PostgreSQL storage (using postgresql-async)
- Reactive MongoDB storage (using reactivemongo)
- Slick direct embedding queries
- Spray json integration
- Lazy entity lists
- Limited queries offset
- toUpperCase/toLowerCase query functions
- Bug fixes

Activate 1.2 is available at maven central.

The main feature of this version is the Polyglot Persistence. For example, an entity from a relational database can have a property that is an entity from a non-relational database. Transactions are distributed using the two-phase commit protocol.

Another important feature is optimistic offline locking. It is now possible to use Activate as a Distributed STM without the need for the Coordinator central service.

Release notes (with links to the documentation):

- Scala 2.10
- Polyglot persistence
- Optimistic offline locking
- Play 2.1 support
- DB2 jdbc dialect
- Limited queries
- Better transient field support
- Custom serializers
- Nested list order preservation
- Bug fixes

Activate 1.0 was launched, people were talking about it, and to my pleasant surprise Jonas Bonér, the Typesafe CTO, posted about it on Twitter:

@jboner Activate Framework – durable STM with pluggable persistence: http://activate-framework.org/ #scala

Really nice! But then there is another post:

@jboner For the record: I don’t believe in durable STMs. We tried that in Akka a couple of years ago. Dropped it for a reason. Use Slick instead.

While I fully respect Jonas Bonér and the Akka team, I disagree with this. I had already read about the reasons why the Akka Persistence module was discontinued, and they do not point to a problem with STM itself, but rather to design pitfalls in that specific Akka module. This is the document produced by the Akka Team:


The analysis focuses on two main points:

1. It is not possible to guarantee the Durable STM's consistency due to the absence of ACID transactions in NoSQL databases. (“No failure atomicity” and “No consistency”)

2. The Akka Persistence STM is not distributed, so it is not possible to use it with multiple virtual machines. (“No isolation” and “Lost updates”)


The second point is about an absent Akka Persistence feature, so it is not possible to do a study on it. Activate addresses this issue by providing a Coordinator to make the Durable STM distributed. More information here and here.

The first point is about problems people were running into when using Akka Persistence. So it is possible to do a study on what went wrong. To start, we should locate the old source code. The module was moved to the also discontinued akka-modules repository. This is the source code (v1.0 tag):




Since Activate achieves a high level of consistency even when used with MongoDB (non ACID), perhaps Akka Persistence was doing something different than Activate that could produce inconsistencies.

To find this difference, we can implement a very simple atomic storage:


This storage supports only refs and uses a synchronized map of atomic integers to store them. Placement and retrieval of items in this storage are fully atomic. Since Akka Persistence's problem was supposedly due to the lack of atomicity in NoSQL databases, this atomic storage should cause no problems in the following test case.
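The embedded snippet is not reproduced in this text, but a minimal sketch consistent with the description above could look like this. The method names mirror the Akka Persistence ref-storage API only loosely and are assumptions, not the exact signatures:

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.collection.mutable

// A storage that supports only refs: a synchronized map of atomic
// integers, keyed by the ref's uuid. Both placement and retrieval
// are atomic operations.
class AtomicRefStorage {
  private val refs = mutable.Map[String, AtomicInteger]()

  def insertRefStorageFor(uuid: String, value: Int): Unit = synchronized {
    refs.getOrElseUpdate(uuid, new AtomicInteger(0)).set(value)
  }

  def getRefStorageFor(uuid: String): Option[Int] = synchronized {
    refs.get(uuid).map(_.get)
  }
}
```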

Test case

This test creates a ref with an initial value of 0 and runs 50 threads in parallel, each one incrementing the value by 1 in an STM transaction. The ref's final value should be 50:
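The embedded test code is not reproduced here; the following is a sketch of the test's shape only. The real test wraps each increment in an STM transaction on an Akka Persistence ref; a plain AtomicInteger stands in here so the skeleton runs without the discontinued Akka modules:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object IncrementTest {
  def run(threads: Int = 50): Int = {
    val ref = new AtomicInteger(0)              // initial value 0
    val pool = Executors.newFixedThreadPool(threads)
    (1 to threads).foreach { _ =>
      pool.execute(() => ref.incrementAndGet()) // the "transaction": value += 1
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    ref.get
  }
}
```

With any correct concurrency control, the final value is 50; the point of the original test is that the value Akka Persistence placed in the storage diverged from it.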


The test then verifies that the final values are not equal to the expected value (50). The console output:


The akkaCurrentValue and databaseCurrentValue are the same, but they differ from the expected value, and the exact result varies on each test execution. The multiverseCurrentValue is always zero.

Why does Akka Persistence produce inconsistencies, even using an atomic storage?

The commit flow problem

Akka Persistence uses the Multiverse STM and listens to its transaction events:


    mtx.registerLifecycleListener(new TransactionLifecycleListener() {
      def notify(mtx: MultiverseTransaction, event: TransactionLifecycleEvent) = event match {
        case TransactionLifecycleEvent.PostCommit => tx.commitJta
        case TransactionLifecycleEvent.PreCommit => tx.commitPersistentState
        case TransactionLifecycleEvent.PostAbort => tx.abort
        case _ => {}
      }
    })
The persistent commit occurs when the PreCommit event is fired by Multiverse. Looking at its implementation, we can see where this event is fired:


public final void commit() {
    ...
    case Active:
        ...

public final void prepare() {
    ...
    case Active:
        try {
            ...
Before these lines there are only console log actions, so we can conclude that the PreCommit event is fired as the very first step of a transaction commit. This is the basic flow of an STM transaction commit:

1. Lock each transactional unit (refs) used by the transaction;
2. Validate all reads and writes, throw an exception to retry the transaction in case of inconsistency;
3. Commit the write operations values on the transactional units (refs) in memory;
4. Release locks.

Since the PreCommit event occurs before this workflow, Akka Persistence was storing values that could be invalid. If the transaction retries, the invalid value has already been placed in the storage. This looks like a design error. It would be more reasonable to persist the values:

- After validating the transaction (2), as the storage should receive only valid items.
- Before releasing the locks (4). Otherwise, it is not possible to ensure that items are placed in the storage in the same order as the STM transaction commits.

Activate uses a different commit flow in order to solve this problem:

1. Lock each transactional unit (refs) used by the transaction;
2. Validate all reads and writes, throw an exception to retry the transaction if there is inconsistency;
3. Place items in the storage;
4. Commit the write operations values on the transactional units (refs) in memory;
5. Release locks.
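The five steps above can be sketched in code. Everything here (the Ref class, the version check, the store callback) is an illustrative placeholder and not Activate's actual implementation; the point is the ordering: the storage write (3) happens after validation (2) and before the in-memory commit (4) and the unlock (5):

```scala
import java.util.concurrent.locks.ReentrantLock

// A transactional unit: a value plus a version for read validation.
class Ref(var value: Int, var version: Long = 0L) {
  val lock = new ReentrantLock()
}

class RetryTransactionException extends RuntimeException

object CommitFlow {
  def commit(
      reads: Map[Ref, Long],         // ref -> version observed by the transaction
      writes: Map[Ref, Int],         // ref -> new value
      store: Map[Ref, Int] => Unit   // storage write
  ): Unit = {
    val refs = (reads.keySet ++ writes.keySet).toSeq
    refs.foreach(_.lock.lock())                        // 1. lock the refs
    try {
      if (!reads.forall { case (r, v) => r.version == v })
        throw new RetryTransactionException            // 2. validate reads
      store(writes)                                    // 3. storage first
      writes.foreach { case (r, v) =>                  // 4. in-memory commit
        r.value = v; r.version += 1
      }
    } finally refs.foreach(_.lock.unlock())            // 5. release locks
  }
}
```

If validation throws at step 2, the storage never sees the invalid value, which is exactly the property the Akka Persistence flow lacked.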

The direct access problem

A Durable STM can be seen as a way to represent in memory the data that lives in the storage. Each data representation can be called an STM transactional unit. These data representations must obey two important restrictions:

1. Each data representation must be unique in memory. If there is more than one transactional unit representing the same data, the STM logic that validates transactions cannot detect a read/write conflict on the data.

2. All access to the data must go through a transactional unit. If there is a direct read or write to the storage, again the STM validation logic cannot detect read and write conflicts.

The first restriction was broken in Akka Persistence 1.0-M1. The problem was solved by the StorageManager class in 1.0.

The second restriction remains broken: the Persistent* classes access the storage directly when reading values, and the read goes through the transactional unit only after a write. PersistentRef, for example:


def get: Option[T] = if (ref.isDefined) ref.opt else storage.getRefStorageFor(uuid)

The ref is defined only after a write, so all reads before the first write are not tracked by the STM concurrency control. This is another Akka Persistence design problem.

Activate obeys these two restrictions. It uses soft references to guarantee that there is only one transactional unit for each piece of storage data, and it guarantees that all reads and writes go through a transactional unit. The database is read only to initialize the transactional unit (lazy entity initialization), or whenever the entity is reloaded by the Coordinator because it has been modified in another virtual machine.
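The two restrictions can be sketched with a soft-reference cache. The names here (TransactionalUnit, UnitCache) are illustrative, not Activate's classes: the cache keeps at most one in-memory unit per storage id (restriction 1), and reads always go through the unit, which lazily initializes itself from the storage on first access (restriction 2):

```scala
import scala.collection.mutable
import scala.ref.SoftReference

// A unit that reads the storage only once, to initialize itself.
class TransactionalUnit(val id: String, load: String => Int) {
  private var loaded = false
  private var value = 0
  def get: Int = synchronized {
    if (!loaded) { value = load(id); loaded = true } // storage read happens only here
    value
  }
}

// Soft references let the GC reclaim unused units while still
// guaranteeing uniqueness for the units that are alive.
class UnitCache(load: String => Int) {
  private val cache = mutable.Map[String, SoftReference[TransactionalUnit]]()
  def unitFor(id: String): TransactionalUnit = synchronized {
    cache.get(id).flatMap(_.get).getOrElse {
      val unit = new TransactionalUnit(id, load)
      cache(id) = new SoftReference(unit)
      unit
    }
  }
}
```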


The Akka Persistence inconsistency problems would be present even with fully ACID storages. The same concurrency test that fails using Akka Persistence with the implemented atomic storage passes using Activate with the non-atomic storage MongoDB:


Activate adds a very high level of consistency on top of MongoDB, but eventually (particularly when using the database's eventual consistency together with the Coordinator) there can be inconsistencies such as stale reads. This is an expected limitation of any non-ACID database; Activate just minimizes it.

If your application cannot tolerate even rare inconsistencies, just use Activate with ACID storages so you get full consistency support. You can choose which modules to assemble in your application. The available modules for 1.1 are Jdbc, Mongo and Prevayler. In 1.2 there will also be a module to support graph databases.

This autopsy shows that Akka Persistence has two problems that make its usage unfeasible with atomic and non-atomic storages alike. Even if the arguments presented in the Akka Team document were completely right, they would be a reason to avoid Durable STM with NoSQL databases, not to discredit Durable STMs altogether.

Activate Persistence Framework v1.1 is available. The main new feature is the Coordinator, which transforms Activate into a Distributed STM. There is new architecture documentation too.

Release notes:

- Play application hot reload support (ActivatePlayPlugin)
- New query syntax option: select[MyEntity] where(_.name :== "a")
- New databases: h2, derby and hsqldb.
- Support for list fields in entities: class MyEntity(var intList: List[Int]) extends Entity
- @Alias annotation to customize entity and field names at the storage
- Manual migrations
- Multiple VMs support (Coordinator)
- Performance enhancements
- Bug fixes