DevOps: The Past, Present and Future!

Srikanth Sundararajan, Partner – Ventureast

During the late 80s and early 90s, I had the opportunity to work with great minds in the industry, especially in software development tools. I was part of the team that worked on Hewlett-Packard SoftBench – the first integrated software development environment on the Unix platform (HP-UX). It included multiple language bindings (C/C++/COBOL), language-sensitive editors, static and dynamic analyzers, as well as integrated version control systems such as RCS, SCCS, and History Manager. The notions of parallel check-out of source code and the 3-way merge were exciting problems to work on.

ClearCase, the Cadillac of configuration management systems, was a great platform which allowed distributed software development across WAN environments, leveraging proxy servers and synchronization. We got to work with the teams at Apollo/HP who had worked on the Domain Software Engineering Environment (DSEE). Apollo and DEC had the foresight to build an integrated set of development tools into their respective operating systems. It was a great opportunity to learn from the experts in this area, who happened to be great mentors as well, and every day provided wonderful learning experiences.

Yesterday

The initial era of DevOps laid the foundation and brought with it a variety of challenges and experiences.

The challenges of releasing products in the early days were shared by every team member, and they were much the same everywhere: on open-source projects like Vim, on the Mosaic browser in graduate school, and later on several product teams at HP and Informix.

Challenges around setting up the development environment were evident right from the start: we had to set up our hardware environment, install patches via tapes and later discs, and keep everything synchronized while setting up the LAN (including configuring routers, assigning IP addresses for isolation, and adding networked file systems as common stores for the source code as well as the deployed bits). We did everything from the front end, back end, and databases to network-oriented communication (sockets/RPC). Front-end work was also evolving; we had exposure to the X Window System thanks to Project Athena (MIT), and to distributed and typed file systems thanks to the Andrew project (CMU). We also had to make our code efficient from a memory perspective, so we needed to figure out the optimal use of shared libraries ('static' initially, 'dynamic' later). Once the system was built, we would deploy it for testing, including installs in a sandbox environment with its own tools and related elements.

Challenges of releasing the product included going through the edit-compile-debug cycle, followed by integration and system builds, because products were typically built by several small teams. We also had to act as the systems admins: spin up new servers when required, install third-party packages if needed, keep OS versions in sync, and so on. Then came the pre-Beta environment, where the product media was cut, tested via physical installs, and shipped to beta customers. The feedback would be incorporated, and the entire cycle repeated until overall quality and release criteria were met or exceeded.

In short, you had to be good at all aspects of the development-to-deployment cycle!

Today

With the advent of the web browser, Java, and interpreted languages such as Tcl/Tk and Perl, the world began to change with the push towards larger online systems. Hardware was becoming a commodity. Several new IDEs from the likes of Sun Microsystems, IBM, Symantec, and Microsoft were becoming popular, and they started to bring the worlds of development and deployment together, as they came bundled with application servers (e.g., JBoss, Tomcat, Apache, WebLogic, WebSphere, and others). One could package code into JARs, WARs, or EARs, deploy it, and perform end-user testing. Third-party libraries, open source or otherwise, were also prevalent and needed to be packaged. Extraction and installation had to be done in dependency order, and prior to a run, a whole set of services had to be turned on or reviewed.
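
To make that deploy-and-test step concrete, here is a minimal sketch of checking that dependent services answer before copying a packaged WAR into an application server's deploy directory. The host names, ports, and paths are illustrative assumptions, not infrastructure described in this article.

```python
# Minimal pre-deploy check sketch; host names, ports, and paths below are
# illustrative placeholders only.
import shutil
import socket
import sys

REQUIRED_SERVICES = {                      # hypothetical runtime dependencies
    "database": ("db.internal", 5432),
    "message-queue": ("mq.internal", 61616),
}

WAR_FILE = "build/myapp.war"               # illustrative packaged artifact
DEPLOY_DIR = "/opt/tomcat/webapps/"        # illustrative app-server deploy dir

def service_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def main() -> None:
    # Refuse to deploy unless every dependent service answers on its port.
    down = [name for name, (host, port) in REQUIRED_SERVICES.items()
            if not service_up(host, port)]
    if down:
        sys.exit("Aborting deploy, services unreachable: " + ", ".join(down))
    # Hot-deploy: the application server picks up new WARs from this directory.
    shutil.copy(WAR_FILE, DEPLOY_DIR)
    print("WAR copied; ready for end-user testing.")

if __name__ == "__main__":
    main()
```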

Clearly, the world is evolving towards continuous development and deployment, and different specialized skills need to be developed along the way.

We have moved on from deploying into our own environments and putting up SFTP sites from which customers could download the bits (the latter of which is still relevant today). Everything has moved to the cloud thanks to offerings from Amazon Web Services, Microsoft Azure, Google Cloud Platform, and others. You no longer need to own the physical environments, security, and core services, as the systems administration work is taken care of by the cloud service providers.

Other noteworthy technology trends have been NoSQL databases like Aerospike and MongoDB; distributed file systems such as Hadoop's HDFS and GFS, along with peripherals or add-ons like Hive and ZooKeeper; and the evolution of newer development stacks, from the original LAMP stack to MEAN, MERN, MEVN, Flutter (cross-platform development), serverless offerings (AWS Lambda), and others.
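
Serverless offerings make a single function the unit of deployment. As a small illustration, here is a sketch following AWS Lambda's Python handler(event, context) convention; the function name and response shape are illustrative assumptions, not from the article.

```python
# A minimal serverless unit of deployment, following AWS Lambda's Python
# handler(event, context) convention; function name and response shape are
# illustrative only.
import json

def lambda_handler(event, context):
    """Echo the request body back; Lambda invokes this once per event."""
    body = event.get("body") if isinstance(event, dict) else None
    return {
        "statusCode": 200,
        "body": json.dumps({"received": body}),
    }
```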

Then came utilities for search like Solr and Elasticsearch, streaming utilities for data, video, and audio, and easy API integrations (mainly REST), where everything became API-accessible and measurable on a usage basis. All of this had to be tested and deployed on a continuous basis. The skills became more specialized and focused on packaging; workflow environments like Chef or Puppet were leveraged, and custom scripts (bash or Python) had to be written, especially for pre- and post-deployment checks.
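
As a concrete example of the kind of post-deployment check script mentioned above, here is a small Python sketch; the health-check URLs are assumptions for illustration only.

```python
# Sketch of a post-deployment health-check script; endpoint URLs are
# illustrative assumptions.
import json
import sys
import urllib.request

HEALTH_CHECKS = [                          # hypothetical service health endpoints
    "http://api.internal:8080/health",
    "http://search.internal:9200/_cluster/health",
]

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def main() -> None:
    failures = [url for url in HEALTH_CHECKS if not check(url)]
    report = {"status": "fail" if failures else "ok", "failed": failures}
    print(json.dumps(report, indent=2))
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()
```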

The evolution continues in this context with the advent of microservices architectures and container technology, which allow subsystems to be packaged and deployed in a dependency-oriented manner.
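
A minimal sketch of what "dependency-oriented" deployment can mean in practice: given a graph of which service depends on which, bring them up in topological order so every dependency starts before the services that need it. The service names, the graph, and the deploy step are illustrative placeholders.

```python
# Sketch of dependency-oriented deployment using a topological ordering.
from graphlib import TopologicalSorter     # standard library, Python 3.9+

DEPENDS_ON = {                             # hypothetical dependency graph
    "api-gateway": {"orders", "users"},
    "orders": {"database", "message-queue"},
    "users": {"database"},
    "database": set(),
    "message-queue": set(),
}

def deploy(service: str) -> None:
    # Placeholder for whatever actually starts the container or service
    # (in practice, a call out to the container runtime or orchestrator).
    print(f"deploying {service}")

if __name__ == "__main__":
    # static_order() yields each service only after all of its dependencies.
    for service in TopologicalSorter(DEPENDS_ON).static_order():
        deploy(service)
```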

Companies like Postman, Hasura.io, SigNoz, Minjar, and Calm.io have all focused immensely on helping to alleviate various issues in this frenzied development-to-deployment world!

Today, job descriptions like full-stack developer or DevOps lead are common; one often wonders what these roles would have been called in the late 80s and early 90s. In short, the roles have become more specialized and need to be segregated. For DevOps (deploying and managing operations), the ability to debug issues is super-critical given that most businesses are online and on the cloud; rollback and deployment are crucial, and understanding all runtime dependencies is a MUST!

Tomorrow

DevOps will continue to be relevant and important as we look into the future, and perhaps more specialization will be required: for example, mobile DevOps, over-the-air updates for IoT (Internet of Things) devices, or specialized data-oriented deployment related to machine learning models – the term MLOps is being bandied around. Why? Because the workflow in the world of AI/ML is a tight loop involving data, leading to model variants, delta data introductions, and continuous refinement leveraging various statistical and non-statistical tools. Each model variant needs its own context.
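
To illustrate that tight loop, here is a deliberately simplified Python sketch in which each new batch of delta data produces a fresh model variant, versioned together with the context (here, just a fingerprint of the training data) that produced it. Everything in it is illustrative; a real pipeline would use proper training code and a model registry.

```python
# Simplified sketch of an MLOps-style loop: delta data in, versioned model
# variant out. All names and the "training" step are illustrative.
import hashlib
import json
import time

def train(data):
    """Stand-in for real training: the 'model' is just the running mean."""
    return {"mean": sum(data) / len(data)}

def register_variant(model, data, registry):
    """Record a versioned model variant along with its training context."""
    variant = {
        "version": len(registry) + 1,
        "trained_at": time.time(),
        "data_fingerprint": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "model": model,
    }
    registry.append(variant)
    return variant

if __name__ == "__main__":
    registry, seen = [], []
    for delta in ([1.0, 2.0, 3.0], [2.5, 3.5], [4.0]):   # successive delta batches
        seen.extend(delta)                                # training data grows
        variant = register_variant(train(seen), list(seen), registry)
        print(f"variant v{variant['version']}: {variant['model']}")
```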

So, we shall see a whole slew of specialization within DevOps as we know it today. The recent funding of Tekion is exciting; given their Tesla DNA, they are going to be a smart platform for all vehicles. Imagine how that will impact the world of DevOps. IoT-integrated plays are also going to need specialized skills from a deployment and monitoring perspective. In summary, DevOps is here to stay and will continue to evolve!

Srikanth Sundararajan is partner at Ventureast. The views in this article are his own.
