Posts Tagged ‘architecture’

scalable and invisible architectures

June 14th, 2016 No comments

AWS Lambda, Azure Functions, Google Cloud Functions, OpenWhisk from IBM … the "serverless" world is bearing down on us with all the fury of a hype cycle in full heat … the option of running code without provisioning anything, mere "functions" exposed on the internet. Will the application server follow the same path to obsolescence as the famous middle manager? Will the web server go next? Not for nothing does the April 2016 ThoughtWorks Technology Radar advise caution about the application server ("hold", as they classify it) as a key piece of the ecosystem. As always, it depends, and things and organizations vary a lot in their adoption or their sloth (the "laggards"). Consider also, on the other hand, the spread of the – a priori – beneficial consequences for security, patching, management, and so on (can there be malware if there is no server?). One more turn of the screw on the idea of running virtual things on top of virtual things on top of virtual things ("it's virtual machines all the way down").

Although the idea is sold as if you could simply "hang" independent functions in the cloud – and isn't that exactly what a web service is? – in the end what the providers do is wrap those functions behind an API gateway and expose them so clients can call them over the ever-popular REST. Underneath, there still has to be an execution environment: a container with the virtual machine (I mean the Java virtual machine, for example, not a virtual machine in the other sense of the term). The advantage is now having something lighter than a full OS to run a VM to run a lambda, either by relying on a container that carries just the bare essentials or by running directly on the hypervisor.

Quoting from the AWS Lambda page: "the core components are Lambda functions and event sources. An event source is the AWS service or custom application that publishes events, and a Lambda function is the custom code that processes the events." This suggests reactive, observable architectures, perhaps even getting rid of that old ESB… integration and data transformation scenarios. As a counterpart, it also suggests systems that are hard to debug, hard to understand, and hard to reason about. It suggests functional programming, and building entire systems in this paradigm combined with a judicious use of CDNs. It could certainly mean many changes at the solution-architecture level. There are people building whole systems out of (Lego?) pieces such as Auth0, Firebase, API Gateway, Lambda, SQS, S3, CloudSearch, Elastic Transcoder, etc.
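The event-source/function split quoted above can be sketched as a plain handler: the platform pushes an event in, your code processes it, and that is the whole "server". This is a hypothetical sketch – the event shape loosely mimics an SQS-style batch, and all names are illustrative, not any provider's real API:

```typescript
// A Lambda-style function: just code that processes events published by an event source.
// No server, no routing – event in, result out.
interface EventRecord {
  body: string;
}

interface IncomingEvent {
  Records: EventRecord[];
}

export function handler(event: IncomingEvent): { processed: number; bodies: string[] } {
  // The "business logic" here is trivially normalizing each record's payload.
  const bodies = event.Records.map((r) => r.body.trim().toUpperCase());
  return { processed: bodies.length, bodies };
}
```

In the hosted version, the provider's API gateway would wrap this handler and expose it over REST, exactly as described above.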


Obviously, it's not that servers really disappear; rather, they disappear as a concern for the developer: problems such as deployment, scaling, configuration, and even the operating system itself fade into oblivion while we focus on services and elastic compute platforms. The theory says it is now even easier to build complex systems that can grow, scale, and evolve on demand – the classic sales pitch, and we already know it is never quite that simple or that pretty.

To learn more:

A comparison of AWS Lambda and Azure Functions

ThoughtWorks sobre Serverless Architecture

What is serverless

A post on serverless at CloudAcademy

Speed of understanding

December 4th, 2015 No comments

Is your code easy to understand? Can someone new to the codebase quickly grasp most of the intent of the code? What is the speed of understanding of your code? While this is basically the old idea that code should be readable by humans – and that includes business people and managers 😉 – what strategies and tactics can you put in place to make speed of understanding an intrinsic quality of your codebase? The thing is that it is very rare for the person who actually wrote the code to also maintain it for the entire lifetime of the software (and even in that case, knowledge about a codebase is one of the most ephemeral things in life, especially with significant – read, large – codebases).

When a codebase is not well organized, it very quickly becomes so confusing that even the original team members find themselves scrambling and frantically searching through it to find that snippet they wanted to reuse, or that pesky function whose name they can't recall (for some reason, some pieces of code or some function names are much harder to commit to memory and find quickly; treat that as a warning that some action is needed there!). I assume we are working in an OOP environment, which probably lends itself best to this, although of course functional languages and strange beasts like javascript can do the trick as well. Two of the strategies (or maybe tactics) that help the most are:


  • Use of DDD, a well codified plain-object domain that models / reflects your business, which is how DDD goes about creating a shared vocabulary between business and engineering. That will lead you to having a DSL for the business domain.
  • Write the client code first – the client code that you would like to be writing. This scenario is greatly enabled by already having some DSL in place and, of course, hindered by not having one. It is similar to what some call "outside-in" TDD.
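The two strategies above can be sketched together: decide what you want the call sites to read like first, then back-fill the domain object that makes that vocabulary real. Everything here (Order, addLine, total) is a hypothetical example, not taken from any specific codebase:

```typescript
// The client code we'd like to write first, in the domain's own vocabulary:
//   const order = new Order();
//   order.addLine("SKU-1", 2, 9.99).addLine("SKU-2", 1, 4.5);
//   order.total();
// The plain-object domain class below exists only to make that reading possible.
class Order {
  private lines: { sku: string; qty: number; unitPrice: number }[] = [];

  addLine(sku: string, qty: number, unitPrice: number): this {
    this.lines.push({ sku, qty, unitPrice });
    return this; // fluent, so client code reads like the business language
  }

  total(): number {
    return this.lines.reduce((sum, l) => sum + l.qty * l.unitPrice, 0);
  }
}
```

The point is the direction of design: the shape of `Order` was dictated by the client code we wanted, not the other way around.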


These are, for me, strategies more than mere tactics. Both are of course supported by TDD, as you can actually use TDD as a "discovery" tool for that client code you want to be writing. However, I am willing to bet that if you already have non-trivial business knowledge about your domain, you can skip that process of discovery and write the code you want to see (as in "be the change you want to see", you know), without that meaning "do not back up your code with tests." That being said, it is easier said than done, of course.




Now, there are tactics as well. Nothing trailblazing here:


  • Excellent naming conventions and the discipline to uphold them. The days when one had to save precious bytes by skimping on names are long gone. Probably only in the case of lambdas, pattern matching, etc. should you get away with names like x, _, temp, and so on.
  • Good commenting habits (there is a lot on this in that great book – Code Complete)
  • Refactoring to small single-purpose functions.
  • Do not overdo the "look ma, how smart I am" style. The common advice to write code that is not "so smart" (clever one-liners just because you can, nested ternary operators, complicated lambdas) is true insofar as you don't want to write production code like this or this, but it does not mean avoiding (relatively) "advanced" features of your language for fear that developers coming in later are not familiar with them. They might never learn them otherwise!
  • Consistent style in all things: naming, spaces, indenting.
  • Share the code. Have other developers from other teams have a look. Even ask a lay person, your boss for example, to guess what a piece of code does.
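A toy illustration of two of the tactics above working together – descriptive naming plus refactoring to small single-purpose functions. The domain rule is invented for the example:

```typescript
interface Customer {
  yearsActive: number;
  ordersLastYear: number;
}

// Each predicate does exactly one thing, and its name says exactly what it checks.
function isLongTermCustomer(c: Customer): boolean {
  return c.yearsActive >= 3;
}

function ordersFrequently(c: Customer): boolean {
  return c.ordersLastYear >= 12;
}

// The composing function now reads like a sentence – which is the whole point:
// a newcomer gets the business rule without decoding a compound boolean expression.
function qualifiesForLoyaltyDiscount(c: Customer): boolean {
  return isLongTermCustomer(c) && ordersFrequently(c);
}
```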


You need the strategies, underpinned by the tactics, to achieve that speed of understanding. Would there be a way to actually measure speed of understanding as a quality? I am not smart enough to figure that out. Probably not in an absolute way, as there are different styles and preferences in programming – some like the fluent API style, others call it the train wreck, for example. For the time being, maybe it's enough that you always keep that purpose in the back of your mind when coding, until it becomes a habit.
Let me have this as one purpose of mindfulness in coding from now on. Let’s be more aware of this when designing and coding.

Categories: coding, rants

More stuff on microservices, this time for the .net world

July 15th, 2015 No comments

What factors are important for microservices (a.k.a. fine-grained SOA)? It's said that Node.js is an excellent fit for a microservices approach – which in turn lends itself very well to the fashionable container approach – as it has the following:

  • Excellent package system (npm)
  • Minimal effort to publish new packages (package.json)
  • Node.js itself encourages lightweight, narrow-focus services, not big monolith-ish services
  • Designed to be async I/O in order to achieve better scalability
  • End-to-end javascript stack
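The "lightweight, narrow-focus service" point in the list above can be pictured as a handler that owns exactly one resource and nothing else. This is a hypothetical sketch – route shapes and names are invented, and a real service would sit behind an HTTP server:

```typescript
// One microservice = one narrow responsibility. This service knows only about "users".
interface Request { method: string; path: string; body?: string; }
interface Response { status: number; body: string; }

const users = new Map<string, string>(); // in-memory store, just for the sketch

export function usersService(req: Request): Response {
  const match = req.path.match(/^\/users\/(\w+)$/);
  if (req.method === "PUT" && match && req.body) {
    users.set(match[1], req.body);
    return { status: 201, body: "created" };
  }
  if (req.method === "GET" && match) {
    const name = users.get(match[1]);
    return name ? { status: 200, body: name } : { status: 404, body: "not found" };
  }
  // Anything outside this one resource is simply not this service's problem.
  return { status: 405, body: "method not allowed" };
}
```

Keeping the handler a pure-ish function over request/response data also makes it trivially testable without any network in the loop.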

This is all very true, and we can add the ease of "containerization" of those services. What about those of us who work in the .net world? For years the perception has been that the .net world was somewhat behind in certain aspects, such as DDD adoption (or blissful ignorance of it), microservices, containers (which are coming soon in Windows), and so on. However, I think the tooling is improving a lot this year, and sweeping changes are coming to the platform with the advent of .net core, the new runtimes, etc. We can have all of this, although we might have to make more of a conscious effort to steer our development practices and inertia towards a similar approach.

  • We have a good package system, nuget, and we also have chocolatey (check out the difference). You can even implement your own nuget feed.
  • That means you could bundle certain things in packages (for example, implement cross-layer concerns, AOP stuff, etc. in nuget packages) and then add them to your nimbler solutions.
  • The new VS2015 brings great improvements to the web project idea, doing away with the classic solution approach we are so familiar with and taking the .net developer to the folder-based solution structure we are used to seeing in many other languages. With this come semantic versioning, json files for handling dependencies, the ability to bring grunt.js and gulp.js on board, better minification and uglification in the build process, and so on; in this sense the .net developer has been brought into the same arena as developers in the .js world.
  • Use Web API or REST-ful traditional WCF web services – if you have such legacy services, they can be restified easily (pdf)-, or NancyFX, or ServiceStack
  • You can have async as well, of course

So it's perfectly possible to get the same approach (except the full-stack javascript thing). You can dockerize those services too, if you want; this way you achieve the same "component"-oriented architecture. Don't miss the "Docker for .net developers" Channel 9 series (by Dan Fernandez) if you want a clearer picture of how Docker and .net applications fit together. Naturally, you get the same drawbacks:

  • Deployments get more complicated. No need to argue that a monolithic application can be easier to deploy, although those deployments are often riddled with fear as well. At the same time, the error surface of a deployment gets reduced, or at least more scoped: if only one service is causing errors, you know where to look, and the scope is not as big as in a monolithic application.
  • As deployments get more complicated, you certainly need to automate testing and deployment, so depending on your current practices or skill sets this might be feasible, or downright impossible if a prior effort to bring operations up to date is not made first. Many companies will find themselves not in the right position for this point and the previous one, especially if there is no DevOps culture.
  • Operations get more complex, hosting, managing, lifecycles, monitoring more systems, more processes, more logs (integrated views needed here) etc.
  • Need to consider messaging patterns, need to consider how to handle transactions, if at all, or compensating them, need to consider how to test processes spanning several services that interact together.
  • Need to consider sharding and partitioning your database. After all, what use is deploying redundant copies of your services if they are all going to talk to the same database? You get a potential bottleneck and a dangerously weak link in your system. If you use a database such as mongoDB, you need replica sets, or similar mechanisms in other systems.
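The point about compensating transactions can be sketched as a minimal saga-style runner: each step pairs an action with an undo, and a failure triggers the undos of the completed steps in reverse order. This is entirely illustrative, not any particular framework's API:

```typescript
interface SagaStep {
  name: string;
  action: () => void;      // may throw, simulating a failing service call
  compensate: () => void;  // the undo for an already-completed action
}

// Runs steps in order; on failure, compensates the completed steps in reverse.
export function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const done: SagaStep[] = [];
  for (const step of steps) {
    try {
      step.action();
      done.push(step);
    } catch {
      const compensated: string[] = [];
      for (const prev of done.reverse()) {
        prev.compensate();
        compensated.push(prev.name);
      }
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated: [] };
}
```

In a real distributed setup the "actions" would be calls to other services and the compensations would themselves need retries and idempotency, which is exactly why this concern makes the drawbacks list.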

Significant challenges lie ahead. It is probably safe to assume that not everything is mature yet at the confluence of microservices, containers, and the usual, not-so-easy concerns of scaling, replication, hosting, clustering, service discovery inside containers, resources and performance (cpu, ram, disk, etc.), and so on and so forth. Kubernetes is something to watch closely in the near future, now that we can run on Linux too. Containers can have their own security issues, especially if you download images whose provenance you are not 100% sure about. And then there is dealing with failed services / containers. I think that bringing in some of the strategies the actor model proposes in the face of errors could be interesting. After all, in a way one can imagine containers to be somewhat like actors: they're cheap, lightweight (actors are better in this sense), easy to start or to kill. Which brings me to the point that maybe considering a pure actor model (ported from classic Akka, or even Microsoft Orleans) could be a better option. You can have millions of them, ready to work for you behind your API gateways. I think that could be a nice idea, or at least it looks very nice on paper. Surely it is not as easy to implement as it is to dream it. Certainly we're in for a big sea change :)
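The actor-style "let it crash, then restart" strategy alluded to above can be sketched as a tiny supervisor. No real actor framework is involved here – this is just the restart idea in miniature, with invented names:

```typescript
// A supervisor runs a task and restarts it on failure, up to a limit,
// mimicking the "restart" directive of actor-model supervision strategies.
export function supervise<T>(
  task: () => T,
  maxRestarts: number
): { value?: T; restarts: number } {
  let restarts = 0;
  for (;;) {
    try {
      return { value: task(), restarts };
    } catch {
      if (restarts >= maxRestarts) return { restarts }; // give up: escalate upward
      restarts++; // "kill" the failed worker and start a fresh one
    }
  }
}
```

Replace "task" with "container" and "restart" with "reschedule" and you get, roughly, what an orchestrator does for failed containers – which is why the analogy in the paragraph above is tempting.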