
Ecosystems and the myth of overpopulation

January 11th, 2023

This post is about a topic I am quite interested in, “triggered” by a post by the Copenhagen Institute for Futures Studies discussing, contrary to the current mainstream, whether a causal relationship between population growth and biodiversity decline exists at all, or whether the discourse surrounding supposed ‘overpopulation’ actually does more harm than good.

While I wish the interview linked in the referenced post dug a bit deeper into the topic, it is high time other voices brought this to the fore. The (disproved) Malthusian, anti-human rhetoric of the supposedly ecologist leftist mainstream is an entrenched meme of the political Zeitgeist, but the bidirectional relationship between population, especially rural population, and biodiversity has received no attention, probably because it does not fit the ultimate goals of said mainstream.

General population decline, and the increasing concentration of that declining population in just a few cities in each country, hurt the long-term prospects of (non-industrial) agricultural and livestock farming, yet the preservation of rural ecosystems is in symbiosis with those activities. During electioneering cycles, governments and institutions pay much lip service to this loss of rural population (not so much to the general population decline and skewed population pyramids, since Ehrlich-like ideas still permeate their thinking), but not a lot seems to actually get done.

As the article says, what “we need to do is find out how we can sustain that transition to higher quality lifestyles without massively increasing the rate of consumption. A myopic focus on population control will not get us there.” Paradoxically, forcing people to concentrate in a few cities probably leads to worse consumption patterns: importing food from further away, loss of quality of life, loss of cultural heritage, and so on.

The abandonment of these economic activities means greater depopulation of large, economically marginal spaces, which makes it difficult to manage the new extensive forest masses created naturally or by reforestation. This in turn feeds the growing virulence of the forest fires we see every summer in Southern Europe, leading to greater desertification. It is a vicious feedback loop that accelerates rural population loss and the abandonment of large areas of land, a problem especially notable in places like Spain. Crop abandonment means local food is no longer produced and the countryside is left to waste with no proper management (wildfires again).

This situation leads to a kind of rural desertification, caused by the breakdown of an ecosystem that had been in place for years, maybe even centuries. Such changes also affect the local fauna already adapted to that ecosystem.

Efforts at vegetation recovery are costly in terms of economic investment, which will only be amortized in the longer term if there are incentives and economic activities that ensure its protection and viability. Even if publicly funded, you need to raise that capital through taxes first, and for that you need a productive economy, which can only work with a non-declining population with enough younger people to become eventual producers of, and consumers for, services and products. I’d argue we need more people, not fewer, not only to keep our current economic systems running, along with our wealth and welfare levels, but also to improve and innovate on them, as the article mentions. Fewer people ultimately means fewer economic incentives for investment, as there is a smaller market for goods and services; hence not only depopulation but also deindustrialization, some sort of undefined rollback.

It is an uncomfortable fact that only richer societies can afford the luxury of taking good care of their environment.

Musings on the use of ChatGPT for developers

December 28th, 2022

 

On the use of ChatGPT in software development

[ Disclaimer: part of this content was generated with the help of ChatGPT, for some meta-oomph, as the result of several prompts, but it was also heavily sanity-checked, edited and expanded by a human with (at least some of) the relevant expertise. ]

As a software engineer, you can use ChatGPT in a variety of ways, including:

  1. Code generation: generate code snippets or entire basic programs based on a set of input parameters or requirements.
  2. Documentation: help generate documentation for your code, including API reference materials and technical guides. You can use ChatGPT to generate code documentation by providing it with a prompt that describes the code that you want to document and the information that you want to include in the documentation. ChatGPT will then generate text that is designed to explain the code and provide context for other developers who may be reading the documentation.
  3. Debugging assistance: help debug code by providing suggestions for potential solutions to errors or issues that you encounter.
  4. Collaboration: to facilitate communication and collaboration with your team by generating responses to messages or questions in real-time.
  5. Learning and exploration: to explore new programming concepts or other topics by asking it questions and receiving explanations and examples in return.

It’s important to note that ChatGPT is a tool and, like any tool, it should be used responsibly and in the context of a larger workflow. It is not a replacement for human expertise or judgment, and it should not be relied upon to make critical decisions or solve complex problems without proper oversight and review. More on this later in this article.

As the disclaimer on this page says, you must not blindly trust the output of such tools; they are an aid that, properly used, can certainly scale your productivity and even the quality of your output. However, you need to exercise the same precautions that apply when copying code and solutions from sites like Stack Overflow, or any similar advice and content from the internet.

 

Code Generation

Again, this won’t give you production code, but if you provide descriptive enough prompts, you already get a good starting point.

Take this prompt for example: Generate for me a C# controller with Create, Read, Update and Delete methods with XML documentation in .Net format that takes a SiteInspection Data Transfer Object as well as Auditor information and implements a Repository pattern with Dapper to write and read from the database. ChatGPT generates the example below for us.

using Dapper;
using Microsoft.AspNetCore.Mvc;
using System.Data.SqlClient;
 
namespace SiteInspectionApp.Controllers {
    /// <summary>
    /// This controller handles CRUD (Create, Read, Update, Delete) operations for site inspection records using the repository pattern and Dapper to access the database.
    /// </summary>
    public class SiteInspectionController: Controller {
        private readonly string _connectionString;
 
        public SiteInspectionController(string connectionString) {
          _connectionString = connectionString;
        }
 
        /// <summary>
        /// Creates a new site inspection record.
        /// </summary>
        /// <param name="siteInspection">A DTO (Data Transfer Object) containing information about the site inspection, including the location, date, and notes.</param>
        /// <param name="auditor">A DTO containing information about the auditor conducting the site inspection, including their name and credentials.</param>
        [HttpPost]
        public void Create(SiteInspection siteInspection, Auditor auditor) {
          using(var connection = new SqlConnection(_connectionString)) {
            connection.Open();
            connection.Execute(
              "INSERT INTO SiteInspections (Location, Date, Notes, AuditorName, AuditorCredentials) " +
              "VALUES (@Location, @Date, @Notes, @AuditorName, @AuditorCredentials)",
              new {
                siteInspection.Location,
                siteInspection.Date,
                siteInspection.Notes,
                auditor.Name,
                auditor.Credentials
              });
          }
        }
 
        /// <summary>
        /// Retrieves a site inspection record.
        /// </summary>
        /// <param name="id">The ID of the site inspection record to retrieve.</param>
        /// <returns>A DTO containing the site inspection information, or a 404 error if the record was not found.</returns>
        [HttpGet]
        public IActionResult Read(int id) {
          using(var connection = new SqlConnection(_connectionString)) {
            connection.Open();
            SiteInspection siteInspection = connection.QueryFirstOrDefault<SiteInspection>(
              "SELECT * FROM SiteInspections WHERE Id = @Id",
              new {
                Id = id
              });
            if (siteInspection != null) {
              return Ok(siteInspection);
            } else {
              return NotFound();
            }
          }
        }
 
        /// <summary>
        /// Updates an existing site inspection record.
        /// </summary>
        /// <param name="siteInspection">A DTO containing the updated site inspection information.</param>
        [HttpPut]
        public void Update(SiteInspection siteInspection) {
          using(var connection = new SqlConnection(_connectionString)) {
            connection.Open();
            connection.Execute(
              "UPDATE SiteInspections SET Location = @Location, Date = @Date, Notes = @Notes " +
              "WHERE Id = @Id",
              siteInspection);
          }
        }

        /// <summary>
        /// Deletes a site inspection record.
        /// </summary>
        /// <param name="id">The ID of the site inspection record to delete.</param>
        [HttpDelete]
        public void Delete(int id) {
          using(var connection = new SqlConnection(_connectionString)) {
            connection.Open();
            // Assumed implementation; the original generated snippet was cut off here.
            connection.Execute("DELETE FROM SiteInspections WHERE Id = @Id", new { Id = id });
          }
        }
    }
}

 

You be the judge of how this code looks and feels, or of what is missing, such as model validation or more robust error and response handling, not to mention the security and secure-coding angle. That being said, generating code like this is quite a help if you need something quick and dirty, for testing something, or for a little tool you will use yourself and then throw away.
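For instance, here is a minimal sketch (mine, not ChatGPT’s) of what adding model validation and a proper response type to the generated Create method could look like, reusing the classes above:

        [HttpPost]
        public IActionResult Create(SiteInspection siteInspection, Auditor auditor) {
          // Reject requests that fail DTO validation instead of writing them blindly.
          if (!ModelState.IsValid) {
            return BadRequest(ModelState);
          }
          using(var connection = new SqlConnection(_connectionString)) {
            connection.Open();
            connection.Execute(
              "INSERT INTO SiteInspections (Location, Date, Notes, AuditorName, AuditorCredentials) " +
              "VALUES (@Location, @Date, @Notes, @AuditorName, @AuditorCredentials)",
              new {
                siteInspection.Location,
                siteInspection.Date,
                siteInspection.Notes,
                auditor.Name,
                auditor.Credentials
              });
          }
          return Ok();
        }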

 

Generating unit tests

 

Unit tests also tend to be overlooked, and writing lots of them can feel repetitive (albeit there is a dopamine effect to seeing all those little green balls when all tests assert correctly). ChatGPT can help you with this.

For a prompt that builds on the previous example, Generate for me the unit tests for a C# controller with Create, Read, Update and Delete methods that takes a SiteInspection Data Transfer Object as well as Auditor information and implements a Repository pattern. The repository must be an injected dependency that can be mocked from the unit tests. we get something like:

 

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
 
namespace MyProject.Tests {
  [TestClass]
  public class SiteInspectionControllerTests {
    private Mock<ISiteInspectionRepository> mockRepository;
    private SiteInspectionController controller;
 
    [TestInitialize]
    public void Setup() {
      mockRepository = new Mock<ISiteInspectionRepository>();
      controller = new SiteInspectionController(mockRepository.Object);
    }
 
    [TestMethod]
    public void TestCreateSiteInspection() {
      // Arrange
      SiteInspectionDTO siteInspection = new SiteInspectionDTO {
        Id = 1,
        Date = DateTime.Now,
        Location = "Test Location",
        Auditor = "Test Auditor"
      };
 
      // Act
      controller.Create(siteInspection);
 
      // Assert
      mockRepository.Verify(repo => repo.Create(siteInspection), Times.Once());
    }
 
    [TestMethod]
    public void TestReadSiteInspection() {
      // Arrange
      int id = 1;
      SiteInspectionDTO expectedSiteInspection = new SiteInspectionDTO {
        Id = id,
        Date = DateTime.Now,
        Location = "Test Location",
        Auditor = "Test Auditor"
      };
 
      mockRepository.Setup(repo => repo.Read(id)).Returns(expectedSiteInspection);
 
      // Act
      SiteInspectionDTO actualSiteInspection = controller.Read(id);
 
      // Assert
      Assert.AreEqual(expectedSiteInspection, actualSiteInspection);
    }
 
    [TestMethod]
    public void TestUpdateSiteInspection() {
      // Arrange
      SiteInspectionDTO siteInspection = new SiteInspectionDTO {
        Id = 1,
        Date = DateTime.Now,
        Location = "Test Location",
        Auditor = "Test Auditor"
      };
 
      // Act
      controller.Update(siteInspection);
 
      // Assert
      mockRepository.Verify(repo => repo.Update(siteInspection), Times.Once());
 
    }
 
    [TestMethod]
    public void TestDeleteSiteInspection() {
      // Arrange
      int id = 1;
      // Act
      controller.Delete(id);
 
      // Assert
      mockRepository.Verify(repo => repo.Delete(id), Times.Once());
    }
  }
}

 

Provide an example of an xUnit test that tests for a custom SiteInspectionException type. The test passes when the exception is thrown by the system under test.

using System;
using Xunit;
 
namespace MyProject.Tests {
  public class SiteInspectionTests {
    [Fact]
    public void TestSiteInspectionException() {
      // Arrange
      var systemUnderTest = new SystemUnderTest();
      // Act and Assert
      Assert.Throws<SiteInspectionException>(() => systemUnderTest.MethodThatThrowsSiteInspectionException());
    }
  }
}

 

Generating documentation

Few developers want to write good documentation; it is time-consuming and perceived as not fun, even though it is a critical part of the SDLC workflow, whether you are writing code for others to use or simply a LoB application you will have to maintain later on.

Using ChatGPT you can generate snippets such as the one below, which you can take as a starting point for Markdown readmes and documentation for your project. This can be an aid when creating Azure DevOps wikis or GitHub documentation for a repo.

 

## SiteInspectionController
 
This controller is responsible for handling requests related to site inspections.
 
### Input
 
- `siteInspection`: A SiteInspection DTO object containing information about the site inspection
 
- `auditor`: An AuditorDTO object containing information about the auditor who conducted the inspection
 
### Output
 
- HTTP status code 200 (OK) if the site inspection is successfully recorded
 
- HTTP status code 400 (Bad Request) if the request is invalid or missing required parameters
 
- HTTP status code 401 (Unauthorized) if the request is not authenticated
 
- HTTP status code 403 (Forbidden) if the user does not have permission to perform the requested action

 

public async Task<IActionResult> RecordSiteInspection(SiteInspectionDTO siteInspection, AuditorDTO auditor) {
  // Validate the request and perform authentication/authorization checks
  if (!ModelState.IsValid || !IsAuthenticated || !IsAuthorized) {
    return BadRequest();
  }
 
  // Record the site inspection
  try {
    await _siteInspectionService.RecordSiteInspection(siteInspection, auditor);
    return Ok();
  } catch (Exception) {
    return StatusCode(500);
  }
}

 

Or, for example, with this prompt: Generate documentation in XML .net format for a C# MVC controller that takes a SiteInspection DTO, an AuditorDTO and can return common http status codes such as 200 or 400, you can get the output below, which can be handy for generating stubs of the XML comments in .Net format.

The implementation is obviously just a placeholder, but in this case we were more interested in saving some time writing documentation.

 

using Microsoft.AspNetCore.Mvc;
 
namespace SiteInspectionApp.Controllers {
 
  /// <summary>
  /// This controller handles requests for creating and retrieving site inspection records.
  /// </summary>
  public class SiteInspectionController: Controller {
 
    /// <summary>
    /// Creates a new site inspection record.
    /// </summary>
    /// <param name="siteInspection">A DTO (Data Transfer Object) containing information about the site inspection, including the location, date, and notes.</param>
    /// <param name="auditor">A DTO containing information about the auditor conducting the site inspection, including their name and credentials.</param>
    /// <returns>An <see cref="OkResult"/> if the site inspection was successfully created, or a <see cref="BadRequestResult"/> if the request was invalid or there was a problem with the provided data.</returns>
 
    [HttpPost]
    public ActionResult Create(SiteInspection siteInspection, Auditor auditor) {
      if (ModelState.IsValid) {
        // Save the site inspection and auditor information to the database
        return Ok();
      } else {
        return BadRequest();
      }
    }
 
    /// <summary>
    /// Retrieves a site inspection record.
    /// </summary>
    /// <param name="id">The ID of the site inspection record to retrieve.</param>
    /// <returns>An <see cref="OkObjectResult"/> containing the site inspection information, or a <see cref="NotFoundResult"/> if the record was not found.</returns>
    [HttpGet]
    public ActionResult Get(int id) {
      // Retrieve the site inspection with the specified ID from the database
      SiteInspection siteInspection = _siteInspectionRepository.GetById(id);
      if (siteInspection != null) {
        return Ok(siteInspection);
      } else {
        return NotFound();
      }
    }
  }
}  

 

Learning

Perhaps this is more similar to Googling for things and finding good content that provides a good answer to your query. Learning does not necessarily have to be about purely technical things; you could also explore topics that are a bit more exotic. Let’s go with something less technical for this one.

 

What are good meditation techniques for software developers?

Meditation can be a helpful practice for software developers to manage stress, improve focus and concentration, and increase overall well-being. Here are a few meditation techniques that may be particularly beneficial for software developers:

  • Focus meditation: This type of meditation involves focusing your attention on a single object or sensation, such as your breath or a mantra, and letting go of distracting thoughts as they arise. This can help improve concentration and clarity of mind.
  • Body scan meditation: In this practice, you lie down or sit comfortably and focus on each part of your body, tensing and relaxing each muscle group in turn. This can help reduce stress and increase relaxation.
  • Loving-kindness meditation: This involves silently repeating phrases of love and compassion to yourself and others. This can help increase feelings of connection and reduce stress and anxiety.
  • Transcendental meditation: This involves repeating a mantra or word in a specific way, typically for 20 minutes twice a day. This can help reduce stress, improve focus, and increase overall well-being.

It’s important to find a meditation technique that works for you and to be consistent in your practice. It may take some time to see the benefits of meditation, so it’s important to be patient and stick with it.

Your mileage might vary with this one 🙂

 

 

Caveats

Stanford University researchers have published an interesting paper titled “Do users write more insecure code with AI assistants?“, which explores the problems these AI tools can introduce when they are blindly trusted and human judgement and supervision abdicate, shifting the burden almost exclusively to the tool. As mentioned there, “AI assistants have the potential to decrease user pro-activeness to carefully search for API and safe implement details in library documentation directly“.

Not only that, the paper also shows that the output of these tools varies a lot with the quality of the user’s prompts and with model parameters (such as the temperature), and that less experienced users tend to accept and trust the output and generated code less critically.

The paper does not seek to demonstrate whether AI assistants generate “buggy code”, which we already know they do – at least for now -, but whether using them causes programmers to be less diligent and allow the introduction of bugs, errors, insecure code, etc., as opposed to programmers who don’t use them. Across five experiments, programmers without AI assistants produced secure code 79% of the time, versus 67% for programmers with an AI assistant: “participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection“. With this, the study incidentally also demonstrates that classic vulnerabilities like the simplest forms of SQL injection are still not widely understood and make it into final code far too frequently.
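To see how low that bar is, here is an illustrative sketch (reusing the SiteInspection type from the earlier examples) of the vulnerable pattern next to the parameterized one:

using Dapper;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class InspectionQueries {
  public static IEnumerable<SiteInspection> FindByLocation(string connectionString, string location) {
    // Vulnerable pattern (do NOT do this): user input concatenated into the SQL text,
    // so a crafted 'location' value can rewrite the query itself.
    // var query = "SELECT * FROM SiteInspections WHERE Location = '" + location + "'";

    using(var connection = new SqlConnection(connectionString)) {
      // Safe pattern: the value travels as a parameter, never as SQL text.
      return connection.Query<SiteInspection>(
        "SELECT * FROM SiteInspections WHERE Location = @Location",
        new { Location = location });
    }
  }
}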

In sum, the results suggest that while AI code assistants may significantly lower the barrier to entry for non-programmers and increase developer productivity, they may give less experienced users a false sense of security, be it in programming or in any other area where the user can be fed incorrect output they are not prepared to detect, fix or discard accordingly.

On the other hand, these revolutionary tools are still in their infancy, and they will probably reach – and then perhaps surpass – human expert parity soon, and that will be when these tools become exponential multipliers of human capacity and productivity.

 

Conclusion

A powerful tool that can yield benefits if used correctly, but it does not automatically produce the best code, or the code you need in a particular scenario. Still, the scaffolding and support it brings to the overall workflow can be a bonus for productivity.

Be aware that the model may produce correct-sounding but incorrect answers. That’s because the model itself has no true, inherent understanding of code; rather, it knows what correct code “looks like” for the prompt, based on the corpus it has been trained on, which includes all of Stack Overflow (I hope some curation took place there) and GitHub.

Being as precise as possible with your prompts is required for the highest probability of a “correct answer”, or at least for a good quality one.

The more surrounding context your code depends on, the less likely it is to offer you a fitting “solution” or provide the code that you need.

Apart from the mere writing of code, it can also be a good source of inspiration for ideas and problem solving.

  • Use it for debugging. You can provide a small piece of code and perhaps it can come with an idea that gives you a solution or a new way of looking at the problem.
  • Get architectural and infrastructure options.
  • Write test cases and documentation, like we saw, to save some time.
  • Learn about implementation options or common design patterns by asking it to provide examples

Below are some prompts I tried but did not check for correctness. You can try questions like these, or variations thereof, to get a better idea of what the system can come up with.

  • Generate a yaml pipeline that builds a dockerized image of a To Do List API in GO and deploys the image to an Azure Function
  • Provide a canonical implementation of the Visitor Pattern in C# (or your language of choice)
  • Why should I not use PHP in mission critical applications?
  • What is wrong with this SQL code? (you provide the example, but bear in mind the caveats mentioned)

 

Categories: Uncategorized

Some notes on Mark Weiser’s Calm Technology

September 26th, 2019

Just yesterday I heard about Mark Weiser for the first time. He is considered the father of ubiquitous computing, and he had a very interesting vision of how technology should behave and how it should help or enhance us. We could say we are now almost immersed in said ubiquity, but judging from his main tenets, I think we are immersed in an at least partially wrong kind of ubiquity.

 

These tenets are:

 

  • The purpose of a computer is to help you do something else. We pretty much agree on this one, at least in principle, but how many out there basically use their computer almost the same way they use their TV sets? That is, to escape life, not to help them achieve something better.
  • The more you can do by intuition the smarter you are; the computer should extend your unconscious. Now, this would be the real thing. We already enjoy a crude form of it by constantly checking the web for facts and figures, while missing some sort of direct brain-search capability, which as of now remains in the realm of sci-fi.
  • The best computer is a quiet, invisible servant. This is where we can see how things have gone awry. It’s more like we far too often are servants to our devices, via the constant attention they demand from us.
  • Technology should create calm. This is an important point, as our digital personas, constantly online, most of the time via several devices concurrently, seem more prone to anxiety, stress and shortened attention spans than to calm, due to the constant alerts, notifications, vibrations and all manner of data snippets that snap at us all the time. I hardly see anybody who is actually calmer once submerged in this flow.

This idea of calm technology is organized around the concepts of center vs periphery (our peripheral reach):

“What is in the periphery at one moment may in the next moment come to be at the center of our attention and so be crucial. The same physical form may even have elements in both the center and periphery. A calm technology will move easily from the periphery of our attention, to the center, and back. This is fundamentally encalming, for two reasons. First, by placing things in the periphery we are able to attune to many more things than we could if everything had to be at the center. Things in the periphery are attuned to by the large portion of our brains devoted to peripheral (sensory) processing. Thus the periphery is informing without overburdening. Second, by recentering something formerly in the periphery we take control of it” (this quote is from Amber Case’s book, precisely titled Calm Technology).

 



These are key observations, for much of the technology and digital products created today aim at being central all the time (this way lies monetization), which reduces our attentional capabilities and produces overburden. To row against that tide, we need to make a deliberate effort if we are to avoid information taking control of us instead of the other way around, and avoid technology creating stress instead of calm. There is “systemic friction” in overburdening and interrupting information.

The challenge seems evident, then: how to design technology that respects our attention, and ideally also helps us improve our cognitive capabilities. This is not what we have now; even though there are tools and workarounds to help safeguard our attention, the whole environment is not propitious. It is in fact geared against that, and requires deliberate action or quite some settings-tweaking from users, which is another way of us tending to tech, rather than tech being designed with our calmness in mind.

I am not a UX expert, so these lines are basically a serving of food for thought, to try and think how we could build on calm principles and calm communication patterns; or how this idea of calmness could be applied not only to the software and digital products we design, but also to teams and companies. That could have a real impact for the better, for people and probably for some of our current social problems as well.

This reminded me of the slow-food idea, and it seems this could be labelled “slow tech”. Furthermore, could we imagine tech one day being as ubiquitous, but also as unobtrusive, as the electricity we have at home? Much of our current technology breaks without warning, or interrupts us with status or software updates, taking us out of our flow and away from our goals, and that basically lays waste to Weiser’s tenets enumerated at the top.

You can read the original paper here

Categories: rants

Never split the difference, brief book review

November 19th, 2017

A few days ago I finished reading “Never Split the Difference“, by Chris Voss. I of course don’t claim to be an expert in negotiation at all, but this is a great book. Although the author’s background is largely in hostage situations, kidnappings, etc., the advice and lessons in the book are no doubt very apt for all areas of life, whether in the workplace or in one’s personal and family life. It borrows quite a lot from the behavioral and neuroscience areas without getting geeky or scholarly, which is why it applies to almost any situation. In fact, it is not only a book on negotiation; you could say it is a very good sales book.

I almost don’t want other people to read it, so I can keep that competitive edge for myself. That being said, gaining that edge of course requires practicing the skills mentioned, a lot, especially for real-time execution.

A reader took the trouble to actually write down their notes in this list, which basically gives you a very good view of the distilled advice in the book. It is, in fact, a very good idea to do this with book notes. I tend to write a lot in books, but this keeps the notes handy, more actionable and more easily reviewed when you don’t have the book at hand, which is quite often. I commit to doing this myself for the best books I read.

 

Categories: books

Review of Exponential Organizations

June 27th, 2017


Some say that Singularity University is science fiction expensively packaged for execs, but even if one always needs to take grandiose claims and hyped inventions-to-be (or never-to-be) with a grain of salt and some skepticism, this is still a book anybody with an interest in innovation and technology should read. It is an enjoyable read and there are plenty of good ideas in the book, although they are not fleshed out in enough detail to become actionable advice.

One of the most important points is the Massive Transformative Purpose (MTP), which is more than a vision or mission statement. This core idea has to be stated clearly in a few words, but it has to be compelling enough to inspire the initial people, smart enough to nurture a community as it grows, and audacious enough to leave plenty of room for growth.

Other key points from the book are the notions of SCALE and IDEAS:

S: staff on demand
C: community & crowd
A: algorithms
L: leveraged assets
E: engagement

I: interfaces
D: dashboards
E: experimentation
A: autonomy
S: social technologies

As a somewhat negative point, it necessarily draws on the same examples as everyone else (Uber, Airbnb, Twitter, GitHub, Local Motors, Snapchat, Facebook, etc.), and in that sense it is too mainstream. It would have been nice to look at minor players making an impact in more significant areas because, honestly, Snapchat does not really have an MTP.

If you like the Lean Startup approach, this book looks at some of the same ideas. It has some really interesting insights into how larger organisations can create, invest in, and develop businesses that are free to grow and change without the constraints of the larger parent company. It gives good rules and guidelines for how to set up a business that could grow quickly, not so much as actionable advice, but more as an inspirational outlook on the potential of great bold ideas.

My own copy is heavily underlined and annotated, but I leave here a list of quotes that other readers have taken the trouble to put together.

 

Categories: Uncategorized

scalable and invisible architectures

June 14th, 2016

AWS Lambda, Azure Functions, Google Cloud Functions, OpenWhisk from IBM… the “serverless” world is bearing down on us with all the fury of a hype in heat: the option of running code without provisioning anything, mere “functions” exposed on the internet. Will the application server follow the same route to obsolescence as the famous middle manager? Will the web server go next? Not for nothing does the April 2016 ThoughtWorks technology radar counsel caution about the application server (“hold“, as they classify it) as a key piece of the ecosystem. As always, it depends, and things and organizations vary a lot in their adoption or their laziness (the “laggards”). Consider also, on the other hand, the scope of the (a priori) beneficial consequences for security, patching, management, etc. (malware, if there is no server?). Another turn of the screw on the idea of running virtual things on top of virtual things on top of virtual things (“it’s virtual machines all the way down”).

Although the idea is sold as if you could simply “hang” independent functions in the cloud (and isn’t a web service exactly that, after all?), in the end what providers do is wrap those functions behind an API gateway and expose them for clients to call via the ever-popular REST. This relies on there being an execution environment, a container with the virtual machine (I mean the Java virtual machine, for example, not a virtual machine in its other sense). The advantage is now having something lighter than an OS on which to run a VM on which to run a lambda, either relying on a container that has just the bare essentials, or running directly on the hypervisor.

Quoting from the AWS Lambda page: “the core components are Lambda functions and event sources. An event source is the AWS service or custom application that publishes events, and a Lambda function is the custom code that processes the events.” This suggests reactive, observable architectures, even getting rid of that old ESB… integration and data-transformation scenarios. As a counterpart, it also suggests systems that are hard to debug and hard to understand and reason about. It suggests functional programming and building entire systems in that paradigm, combined with a judicious use of CDNs. It can certainly mean many changes at the solution-architecture level. There are people building entire systems out of (lego?) pieces such as Auth0, Firebase, API Gateway, Lambda, SQS, S3, CloudSearch, Elastic Transcoder, etc.
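As a sketch of that programming model, a function plus its event input looks roughly like this in C# with the AWS Lambda .net libraries (the event type and handler name are invented for illustration):

using Amazon.Lambda.Core;

// Tell the Lambda runtime how to deserialize the JSON event into our type.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace ServerlessSketch {
  // Hypothetical event payload published by some event source (S3, SQS, a custom app...).
  public class OrderEvent {
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
  }

  public class Function {
    // The Lambda function: custom code that processes events. No server to provision.
    public string Handler(OrderEvent input, ILambdaContext context) {
      context.Logger.LogLine($"Processing order {input.OrderId}");
      return input.Amount > 0 ? "accepted" : "rejected";
    }
  }
}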



Evidently, it’s not that servers actually disappear; rather, they disappear as a concern for the developer: problems such as deployments, scaling, configuration and even the operating system itself fade into oblivion while we focus on services and elastic compute platforms. The theory says it is now even easier to build complex systems that can grow, scale and evolve on demand (the classic sales pitch, and we already know none of it is that simple or that pretty).

To learn more:

A comparison of AWS Lambda and Azure Functions

ThoughtWorks on Serverless Architecture

What is serverless

A post on Serverless at CloudAcademy

reflections on the architect role…

March 14th, 2016

Among the “recriminations” that are a standard cliché about architects and their role, besides the ivory-tower syndrome, it is common to hear complaints about the tendency to abandon projects once their, let’s say, “interesting” part is finished or about to finish (survival instinct? 😉 ). That is, once the architecture has been planned, every excuse for endless meetings, working breakfasts and trips has been exhausted, the execution has been planned and everything looks perfect on paper, the famous architects vanish. In truth, I don’t know whether these mythological figures really exist as such these days (perhaps they did long ago); I haven’t seen them, although I have seen people with their feet very far from the ground in all kinds of positions. I don’t think the individual human being is solely to blame (although it is very true that “everyone wants to design, nobody wants to maintain”…); the organizations, which after all are collective constructions, share that blame.

 

There is another way of looking at things: the architect should be the end-to-end owner of the solution. What’s more, this role is not even fixed; it should change or evolve in step with the different phases of the project life cycle. I don’t think this is claimed or highlighted enough, on either side. In theory, nobody should know the solution better than the architect. Anyone who has been in this industry long enough will realize that the paper and those initial .vsx files have little or nothing to do with the actual implementation of the system at the end of the “long march”, and that what looks complete and almost “perfect” in the powerpoints (which can bear anything) has nothing to do with reality. Which is basically what happens to those exercises in wishful thinking that are Gantt charts, especially in the flagrant case of those created before the requirements are clear or a single line of code has been written (the “political promise” Gantts, as I call them, but that is a topic for another post). As with the blasted Gantts that have caused so much damage, it is futile to pretend you can know in advance all the changing circumstances, errors, problems, shifting assumptions and premises, and every kind of factor impossible to anticipate, because in the end these are “live” systems (I think the nuances of “live” express it better).

 

Because of all this, it is essential that the architect stay on the project, at least in a technical leadership and oversight role supporting the team and the business, up to and including the delivery phase, where they still play an important part. Being an architect in an ivory tower is not enough; the figure of the “coding architect” must be vindicated: one who also knows the implementation of their solution and its infrastructure requirements, and how to support the elicited non-functional requirements. Admittedly, the architect’s role has nowadays been sliced up or specialized so much that the term runs a serious risk of meaning nothing, or meaning anything, and depending on the company we simultaneously find any combination of data architect, business architect, integration architect, infrastructure architect, solutions architect, enterprise architect, information architect, delivery architect, UX architect, etc.



In line with this, the truth is that the architect’s role may be so hard to define because its skills and tasks keep changing as the project burns through its phases in its unstoppable race towards the <sarcasm>most dazzling success, straight into the magic quadrant</sarcasm>. From where I stand, it is easy to see how the solutions architect must gradually turn into a delivery architect. In the end this is just insisting on the principles of responsibility and accountability and of ownership (sometimes the English words seem to carry the semantic load better). At the beginning, the architect will mostly be responsible for understanding the requirements well, coordinating with business stakeholders (different skills, as we can see, both technical and social), understanding the technical and budget limits, managing expectations and the political and cultural environment, and offering what looks like the best option, in their judgment and with the data available. However, nothing (and never was “nothing” better said) ends with the handover of a pile of documents, of deliverables defined by our favorite architecture framework.

 

The key aspect during the life of the project is technical management, far from those extremophile architects used to the rarefied air of the boardroom, and here again there are very diverse capabilities to contribute: stakeholder management, management of technical debt and risks, quality, methodologies, team management, fit-for-purpose, governance and, above all, making sure the solution does not drift out of alignment with the enterprise architecture and the tactical or strategic objectives it owes its existence to.

 

This is probably where the line between Solutions Architect and Delivery Manager overlaps the most, and we can talk about yet another role, the Delivery Architect. The delivery manager will rarely have the same knowledge of a specific solution as the architect, who owns the complete solution; presumably the architect has the best and most complete vision, and the knowledge without which the Delivery Manager would find it very hard to fulfil their responsibilities properly. Often the decisive factor in the success or failure of a project is the efficient collaboration between the Delivery Manager and the architect, not the quality on paper, not the beauty of the solution design in a few Visio or UML files. The architect must therefore help the Delivery Manager in everything, so that the best decisions get made. In this, the architect becomes a delivery architect; their role mutates (and must mutate) as the project evolves. This figure we call the delivery architect must also keep the focus on the non-functional requirements: not only defining them (that too), but making sure the proposed and developed system or solution adheres to them. Let’s not forget that, in the end, the point is for the resulting solution architecture to be operable and cost-effective under the strategies defined at the enterprise level.

At the risk of sounding very pompous, I close with a quote from a book I haven’t read:

Be audacious and cunning in your plans, firm and persevering in their execution, determined to find a glorious end.

Carl von Clausewitz, On War

 

 

Speed of understanding

December 4th, 2015

Is your code easy to understand? Can someone new to the codebase quickly grasp most of the intent of the code? What is the speed of understanding of your code? While this is basically the old idea that code should be readable by humans – that includes business people and managers 😉 -, what strategies and tactics can you put in place to make speed of understanding an intrinsic quality of your codebase? The thing is that very rarely is the person who actually wrote the code the one who will also maintain it for the entire lifetime of the software (and, even in that case, knowledge about a codebase is one of the most ephemeral things in life, especially for significant – read, large – codebases).

When a codebase is not well organized, it very quickly becomes so confusing that even the original team members find themselves scrambling and frantically searching through it to find that snippet they wanted to reuse, or that pesky function whose name they can’t recall (for some reason, some pieces of code or some function names are much harder to commit to memory and find quickly; treat that as a warning that some action is needed there!). I assume we are naturally working in an OOP environment, which probably lends itself best to this, although of course functional languages and strange beasts like JavaScript can do the trick as well. Two of the strategies (or maybe tactics) that help the most are:

 

  • Use of DDD: a well-codified, plain-object domain that models / reflects your business, which is how DDD goes about creating a shared vocabulary between business and engineering. That will lead you to having a DSL for the business domain.
  • Write the client code first – the client code that you would like to be writing. This scenario is greatly enabled by having that DSL first and, of course, hindered by not having it in place. This is similar to what some call “outside-in” TDD. A small sketch follows this list.
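A minimal sketch of both ideas together (an invented invoicing domain; the point is how the client code in Main reads):

using System;
using System.Collections.Generic;

// Plain-object domain in the business vocabulary: this is the budding DSL.
public class Invoice {
  private readonly List<decimal> _lines = new List<decimal>();

  public DateTime DueDate { get; set; }

  public Invoice AddLine(decimal amount) {
    _lines.Add(amount);
    return this; // fluent style keeps the client code readable
  }

  public bool IsOverdueOn(DateTime date) => date > DueDate;
}

public static class Program {
  public static void Main() {
    // The client code we wanted to write "first": it reads like the business language.
    var invoice = new Invoice { DueDate = new DateTime(2015, 12, 31) }
      .AddLine(120.50m)
      .AddLine(80.00m);

    Console.WriteLine(invoice.IsOverdueOn(DateTime.Today));
  }
}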

 

These are, for me, strategies more than mere tactics. Both are supported by TDD, of course, as you can actually use TDD as a “discovery” tool for that client code you want to be writing. However, I am willing to bet that if you already have non-trivial business knowledge about your domain, you can skip that process of discovery and write the code you want to see (as in “be the change you want to see”, you know), without that meaning “do not back up your code with tests.” That being said, it is easier said than done, of course.

 


 

Now, there are tactics as well. Nothing trailblazing here:

 

  • Excellent naming conventions and the discipline to uphold them. The days when one had to save precious bytes by being skimpy on names are long gone. Probably only in the case of lambdas, pattern matching, etc. should you get away with names like x, _, temp, and so on.
  • Good commenting habits (there is a lot on this in that great book – Code Complete)
  • Refactoring to small single-purpose functions.
  • Do not overdo the “look ma, how smart I am”. They say write code that is not “too smart” (clever one-liners just because you can, nested ternary operators, complicated lambdas), and while that is true to the extent that you don’t want to write production code like this or this, it does not mean you should not take advantage of (relatively) “advanced” features in your language for fear that other developers coming in later are not familiar with them. They might never learn them otherwise!
  • Consistent style in all things: naming, spaces, indenting.
  • Share the code. Have other developers from other teams have a look. Even ask a lay person, your boss for example, to guess what a piece of code does.

 

You need to use the strategies, underpinned by the tactics, to achieve that speed of understanding. Would there be a way to actually measure this quality of speed of understanding? I am not smart enough to figure that out. Probably not in an absolute way, as there are different styles and preferences for programming (some like the fluent-API style, others call it the train wreck, for example). For the time being, maybe it’s enough that you always keep that purpose in the back of your mind when coding, until it becomes a habit.
Let me have this as one purpose of mindfulness in coding from now on. Let’s be more aware of this when designing and coding.

Categories: coding, rants

More stuff on microservices, this time for the .net world

July 15th, 2015

What factors are important for microservices (a.k.a. fine-grained SOA)? It’s said that Node.js is an excellent fit for a microservices approach – which lends itself very well to the fashionable container approach – as it offers the following:

  • Excellent package system (npm)
  • Minimal effort to publish new packages (package.json)
  • Node.js itself encourages lightweight, narrow-focus services, not big monolith-ish services
  • Designed to be async I/O in order to achieve better scalability
  • End-to-end javascript stack

This is all very true, and we can add the ease of “containerization” of those services. What about those of us who work in the .net world? For years, the perception was that the .net world lagged somewhat behind in certain aspects, such as DDD adoption (or the blissful ignorance of it), microservices, containers (which are coming soon to Windows) and so on. However, I think the tooling is improving a lot this year, and sweeping changes are coming to the platform with the advent of .net core, the new runtimes, etc. We can have all of this, although we might have to make more of a conscious effort to steer our development practices and inertia towards a similar approach.

  • We have a good package system, nuget, and we also have chocolatey (check out the difference). You can even implement your own nuget feed.
  • So that means you could bundle certain things in packages (for example, implement cross-layer concerns, AOP stuff, etc. in nuget packages) and then add them to your nimbler solutions
  • The new VS2015 brings great improvements to the web project idea, doing away with the classic solution approach we are so familiar with and taking the .net developer to the folder-based solution structure we are used to seeing in many other languages. With this come semantic versioning and json files for handling dependencies, the ability to bring grunt.js and gulp.js on board, better minification and uglification in the build process, and so on; in this sense the .net developer has been brought into the same arena as developers in the .js world.
  • Use Web API or REST-ful traditional WCF web services – if you have such legacy services, they can be restified easily (pdf) -, or NancyFX, or ServiceStack. A minimal self-hosted sketch follows this list.
  • You can have async as well, of course
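As a quick illustration of how nimble such a service can be, here is a minimal self-hosted Web API sketch (OWIN self-host via the Microsoft.AspNet.WebApi.OwinSelfHost package; the route and payload are invented):

using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

// A narrow-focus service: one controller, one route, its own tiny process.
public class PingController : ApiController {
  [HttpGet, Route("ping")]
  public IHttpActionResult Ping() => Ok(new { status = "alive", at = DateTime.UtcNow });
}

public class Startup {
  public void Configuration(IAppBuilder app) {
    var config = new HttpConfiguration();
    config.MapHttpAttributeRoutes(); // enable the [Route] attributes above
    app.UseWebApi(config);
  }
}

public static class Program {
  public static void Main() {
    using(WebApp.Start<Startup>("http://localhost:9000/")) {
      Console.WriteLine("listening on http://localhost:9000/ping");
      Console.ReadLine();
    }
  }
}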

So it’s perfectly possible to take the same approach (except the full-stack thing). You can dockerize those services too, if you want. This way you achieve the same “component”-oriented architecture. Don’t miss the “Docker for .net developers” Channel 9 series (by Dan Fernandez) if you want to gain a clearer picture of how Docker and .net applications fit together. Naturally, you get the same drawbacks:

  • Deployments get more complicated. No need to argue that a monolithic application can be easier to deploy, although those deployments are often riddled with fears as well. At the same time, though, the error surface in deployments gets reduced, or more scoped, in the sense that if only one service is causing errors, at least you know where to look, and the scope is not as big as in a monolithic application.
  • As deployments get more complicated, you certainly need to automate testing and deployment, so depending on your current practices or skill sets this might be possible, or downright impossible if a prior effort to bring operations up to date is not made first. Many companies will find themselves in no position for this point and the previous one, especially if there is no DevOps culture.
  • Operations get more complex, hosting, managing, lifecycles, monitoring more systems, more processes, more logs (integrated views needed here) etc.
  • Need to consider messaging patterns, need to consider how to handle transactions, if at all, or compensating them, need to consider how to test processes spanning several services that interact together.
  • Need to consider sharding and partitioning your database. After all, what use is deploying redundant copies of your services if they’re all going to be talking to the same database? You get a potential bottleneck and a dangerous weakest link in your system. If using databases such as MongoDB, you need replica sets, or similar mechanisms in other systems.

Significant challenges lie ahead. It’s probably safe to assume that not everything is quite mature yet in this confluence of microservices, containers, and the usual, not-so-easy concerns of scaling, replication, hosting, clustering, service discovery inside containers, resources and performance (CPU, RAM, disk, etc.) and so on and so forth. Kubernetes is something to watch closely in the near future, now that we can run on Linux too. Containers can have their own security issues, especially if you download images whose provenance you are not 100% sure about. Then there is dealing with failed services / containers. I think that borrowing some of the strategies the actor model proposes in the face of errors could be interesting. After all, in a way one can imagine containers to be somewhat like actors: they’re cheap, lightweight (actors are better in this sense), easy to start or to kill. Which brings me to the point that maybe a pure actor model (akka.net – ported from classic Akka – or even Microsoft Orleans) could be a better option. You can have millions of them, ready to work for you behind your API gateways. I think that could be a nice idea, or at least it looks very nice on paper; surely it is not as easy to implement as it is to dream it. Certainly we’re in for a big sea change 🙂
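To make the actor idea concrete, here is a minimal akka.net sketch (names invented): the default supervision strategy simply restarts a worker when it throws, which is the kind of cheap “kill and restart” semantics the container comparison hints at.

using System;
using Akka.Actor;

// A cheap, lightweight worker: processes messages until one makes it crash.
public class WorkerActor : ReceiveActor {
  public WorkerActor() {
    Receive<string>(msg => {
      if (msg == "boom") throw new InvalidOperationException("simulated failure");
      Console.WriteLine($"processed: {msg}");
    });
  }
}

public static class Program {
  public static void Main() {
    using(var system = ActorSystem.Create("demo")) {
      var worker = system.ActorOf(Props.Create<WorkerActor>(), "worker");
      worker.Tell("hello"); // handled normally
      worker.Tell("boom");  // throws; the supervisor restarts the actor
      worker.Tell("again"); // the restarted actor keeps processing
      Console.ReadLine();   // keep the process alive to see the output
    }
  }
}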

final state of new features in C#6

July 14th, 2015

Ok, this topic has already been blogged to death, so here are my 2 cts flogging a dead horse. Compared with earlier posts reporting the new features back in 2014, some of the features have been dropped and others have changed their syntax. Below are the most salient features in the new version of the language coming with VS 2015. I will pass on explaining each one in depth, since a) the examples are easy to understand and b) there’s plenty of further explanation on the web and I am too lazy to replicate it.

  • Automatic property initializers: self-explanatory, I think.
  • Using static: import a type’s static members so you can call them without the type prefix.
  • String interpolation: access properties directly in the string template, instead of the old tired way with the ordinal placeholders. Cleaner syntax and less prone to errors of placement and order. Notice the $.
  • Null-conditional operator: a really welcome change, instead of polluting code with null checks or implementing extension methods to avoid that. Now you can check for null references and null properties in a much cleaner and more natural way.
  • Inline event handlers.
  • Exception filters: a welcome change as well, for cleaner management of exceptions. You might have seen this in a previous syntax with if instead of when.
  • nameof: although this could be emulated before, it is nice to have it and avoid hardcoding parameter names.
  • Expression-bodied members.
  • Awaitable catch and finally: await now works inside catch and finally blocks.
  • New way of initializing dictionaries.
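A minimal example exercising most of these features (my own sketch, standing in for the original gists):

using System;
using System.Collections.Generic;
using static System.Math; // using static: call Sqrt(2) instead of Math.Sqrt(2)

public class Person {
  // Automatic property initializer
  public string Name { get; set; } = "unknown";

  // Expression-bodied member
  public string Shout() => Name.ToUpper();
}

public static class Demo {
  public static void Main() {
    var person = new Person { Name = "Ada" };

    // String interpolation: notice the $
    Console.WriteLine($"Hello {person.Shout()}, sqrt(2) is {Sqrt(2)}");

    // Null-conditional operator: no more chains of explicit null checks
    Person nobody = null;
    Console.WriteLine(nobody?.Name ?? "no one there");

    // nameof: no hardcoded identifier strings
    Console.WriteLine(nameof(person));

    // New dictionary initializer syntax
    var ages = new Dictionary<string, int> { ["Ada"] = 36 };
    Console.WriteLine(ages["Ada"]);

    // Exception filter: the catch block runs only when the 'when' filter is true
    try { throw new InvalidOperationException("transient"); }
    catch (InvalidOperationException e) when (e.Message == "transient") {
      Console.WriteLine("filtered catch");
    }
  }
}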
And now, the stuff that has been dropped:

  • IEnumerable params: this does not compile anymore.
  • Declaration expressions.

Categories: c#