The code base had multiple problems, none of which I would blame on category theory or Cats.
- Engineers had written higher abstractions seemingly just because they could. When I audited how internal libraries were used across our services, the calling applications weren't making use of the advanced abstractions at all. I'm talking about things like using Cats to abstract over AWS S3 error handling. Cool in principle, except it wasn't needed: we never actually encountered the exotic compositions of failures the libraries anticipated.
- The abstractions written for our own business logic were worse than the ones over S3: they were premature. Our business logic had to change frequently because the end-user experience was still evolving rapidly. Changes that violated earlier assumptions, and the abstractions built on those assumptions, took longer than they should have or led to very awkward code.
- At least at the time, tooling had more trouble with the "advanced" code. The IntelliJ IDEA Scala plugin could not yet show how implicits were resolved. It couldn't find the senders of messages to an Actor the way it easily finds plain callers of a function. You had to manually force a "clean" in certain modules before code changes would compile as expected. The IDE would also fail to flag code that couldn't compile, and incorrectly flag code that could, at a higher rate than with plainer Scala.
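To make the first bullet concrete, here is a hypothetical sketch of the kind of effect-polymorphic error handling I mean, written against Cats' `MonadError`. This is not our actual library; `ObjectStore`, `getOrDefault`, and the other names are invented for illustration. The polymorphism lets the helper work for any effect `F`, but when every caller uses one concrete effect, that generality buys nothing.

```scala
import cats.MonadError
import cats.syntax.all._

// Hypothetical sketch: an S3-like store abstracted over an arbitrary effect F.
// None of these names come from real code; they illustrate the pattern.
trait ObjectStore[F[_]] {
  def get(key: String): F[String]
}

// Error handling written against MonadError so it works for any F --
// even though in practice every caller only ever used one concrete effect.
def getOrDefault[F[_]](store: ObjectStore[F], key: String, default: String)(
    implicit F: MonadError[F, Throwable]): F[String] =
  store.get(key).handleError(_ => default)

// What the callers actually needed: one concrete type, no type classes.
type Result[A] = Either[Throwable, A]
```

Instantiating `F` as `Either[Throwable, *]` shows the abstraction collapsing to ordinary "return the default on failure" error handling, which plain code expresses just as well.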
I'm still glad that I have access to Cats, Akka, and other advanced parts of the Scala ecosystem. They're still used in a few places where their value is greater than their cost. Even in the plain code, I'm still very glad I have pattern matching, immutability-by-default, rich collections, map, flatMap, filter, fold, scan, find, etc. I have no plans to transition our company off Scala internally. If I were starting a greenfield project with myself as the sole developer, I'd probably be using Scala for that too. But I prefer to write a bunch of simple repetitive code first, then develop abstractions after it's clear what the commonalities are.
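The plain style I'm describing might look like this minimal sketch (all the names here, `Event`, `Purchase`, `Refund`, `netRevenue`, are invented): immutable case classes, pattern matching, and ordinary collection combinators, with no type-class machinery.

```scala
// Minimal sketch of the "plain Scala" style: immutable data, pattern
// matching, and collection combinators. All names are invented examples.
sealed trait Event
final case class Purchase(amount: BigDecimal) extends Event
final case class Refund(amount: BigDecimal) extends Event

def netRevenue(events: List[Event]): BigDecimal =
  events.map {
    case Purchase(a) => a
    case Refund(a)   => -a
  }.sum
```

Only once several functions like this share an obvious shape would I factor the commonality out into an abstraction.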