JanWiersma.com

Lower software dev cost? No!

The software development landscape is on the cusp of a revolution. Large Language Models (LLMs) promise to streamline workflows, automate repetitive tasks, and even generate code. This translates to exciting possibilities: cheaper development and faster time-to-market, as seen in the recent advancements showcased at Microsoft Build 2024’s keynote this week. GitHub Copilot and Copilot Workspace are prime examples of how LLMs are being leveraged to empower developers.

But here’s the Jevons Paradox lurking in the shadows, and its impact might be even more significant with LLMs. Remember the paradox? As steam engines became more efficient and coal effectively cheaper to use, coal consumption skyrocketed, ultimately leading to a greater (not smaller!) energy demand.

Imagine applying this to software. With LLMs lowering the barrier to entry, custom solutions become not just possible, but expected. Think niche functionalities tailored to individual workflows, features that were previously cost-prohibitive.

Jevons on Steroids: Here’s where LLMs amplify the effect. Because LLMs can adapt and learn at an incredible pace, user expectations will likely accelerate. They’ll not only demand custom solutions, but also expect them to evolve rapidly alongside their changing needs.

The Challenge: Keeping Up with a Moving Target

* Customers demanding more: Lower costs will fuel the fire for rapidly adaptable features, including options for niche functionalities specific to individual workflows.
* The pressure to keep up: This surge in demand will necessitate continuous investment in LLM training data, development tools, and most importantly, upskilling our workforce. Developers will need to learn to effectively utilize and guide LLMs to meet these ever-evolving needs. Stagnation means losing customers to competitors who can adapt faster.

LLMs will undoubtedly make development more efficient. And measured against today’s speed of execution and customer expectations, it will be cheaper. But the key takeaway is this: the cost savings won’t be a one-time win. To thrive in this new paradigm, we need to embrace a culture of continuous improvement and invest in keeping our LLM-powered development tools on the cutting edge, alongside a skilled workforce ready to leverage their power.


The Decentralized Web: Reclaiming Our Power

Remember the Wild West days of the internet? Unfettered innovation, boundless potential, and a sense of control over your online experience. Today, it feels increasingly like a battleground. Governments tighten their grip with regulations, while corporations gobble up our data and influence every click. This is the future the book ‘The Sovereign Individual’ warned about – a future where individuals have little control over their digital lives. But it doesn’t have to be this way.

There’s a growing movement advocating for a fundamental shift: a decentralized web powered by cutting-edge technologies. This isn’t just about technology; it’s about reclaiming power and building a future YOU control.

Technical Innovation Leading the Charge:
* Nostr: Imagine a social media platform where no single entity dictates the rules. Nostr, built on a censorship-resistant protocol, empowers you to own your data and experiences (see the sketch after this list).
* Web5: This next-generation web leverages blockchain technology, giving you greater ownership over your data and online interactions, aligning perfectly with Self-Sovereign Identity (SSI) principles. With SSI, you control your digital identity, deciding who can access your information.
* AI: Decentralized AI systems can analyze data on a distributed network, enhancing security, transparency, and user experience without compromising individual privacy.
* Cryptography: Protocols like Nostr utilize cryptography to ensure secure communication and data ownership, making it harder for entities to exploit your personal information.
* Blockchain: This foundational technology underpins many decentralized solutions, providing a secure and transparent way to store and manage data.
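
To make the cryptography point concrete, here is a minimal, hypothetical sketch (in Python) of how a Nostr client could derive the identifier of an event it wants to publish. The field layout follows Nostr’s NIP-01 convention; the signing step is only indicated in a comment, since it requires a BIP-340 Schnorr signature over the id with the user’s secp256k1 key (for example via a library such as coincurve).

```python
import hashlib
import json
import time

def nostr_event_id(pubkey_hex: str, kind: int, tags: list, content: str,
                   created_at: int | None = None) -> str:
    """Compute the NIP-01 event id: SHA-256 of the canonical serialization."""
    created_at = created_at or int(time.time())
    # NIP-01 serializes the event as a JSON array without extra whitespace.
    payload = [0, pubkey_hex, created_at, kind, tags, content]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical 32-byte public key in hex, and a plain text note (kind 1).
event_id = nostr_event_id(
    pubkey_hex="ab" * 32,
    kind=1,
    tags=[],
    content="Hello, decentralized world!",
)
print(event_id)
# The published event also carries a 'sig' field: a BIP-340 Schnorr signature
# over this id made with the matching private key. Any relay or client can
# recompute the id and verify the signature, so no platform can silently
# alter your data.
```

Because identity lives in the keypair rather than in an account on someone’s server, your posts remain yours no matter which relays carry them.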

Why Decentralization Matters:
* Empowering Individuals: A decentralized web puts you in the driver’s seat. You control your data, choose who can access it, and have a say in how online platforms operate.
* Reduced Centralized Control: Decentralization weakens the grip of powerful entities, fostering a more equitable digital landscape where innovation and competition thrive.
* Enhanced Privacy: Decentralized systems make it harder for governments and corporations to collect and exploit your personal information.

The Road Ahead: Shaping Our Digital Future

Building a truly decentralized web won’t be easy, but the potential benefits are immense. We have the technology; now we need the action. Let’s embrace these innovations and build a future where the internet empowers individuals, not corporations or governments. This is the future we should aim for, a future where Self-Sovereign Identity and decentralized technologies are the norm.

One may think this is all just about technology, but it isn’t – it’s about the kind of future we want to create. What kind of digital world do you envision for yourself and your loved ones?


The tale of Services vs. Cloud product organizations

<This blog is background material as part of my 2017/2018 VU lecture series>

 

As companies transition their product delivery methodology from on-premise software to an as-a-Service (PaaS/SaaS) model, they are confronted with very different motions across their Sales, Marketing, Development, Services and Support organizations.

One of the examples that shows how differently ‘execution’ is done in these models is how Services and Product are managed across the organization. For larger on-premise software companies it is not uncommon to see Professional Services (PS) bookings vs. software bookings ratios of >3, meaning that customers pay more for the implementation and assistance in managing the software than the actual purchase price of the software.

The Cloud delivery model has a very different PS dynamic, as Waterstone reports in their 2015 report, Changing Professional Services Economics:

“There is growing preference for Cloud- and SaaS-based solutions that, on average, have a PS attach rate around 0.5x to 1.0X (versus the 2.9x PS attach rate commonly seen with traditional licensed products).”

The analysis is no surprise, as Cloud is all about providing low-friction onboarding through self-service and automation. This means getting the human out of the equation, as manual effort is a limiting factor in scalability and raises cost.

Cloud is all about minimizing the time from idea to revenue, while being able to scale rapidly and keeping cost low.

The definition of ‘the product’ in a Cloud world therefore isn’t only about the bits & bytes, but includes successful onboarding of the customer and maximizing their usage.
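
To make the attach-rate numbers concrete, here is a small, purely illustrative calculation; the deal sizes are hypothetical, and only the attach-rate ranges come from the Waterstone figures quoted above.

```python
# Illustrative only: deal sizes are made up, attach rates follow the ranges
# quoted above (~2.9x for traditional licenses, 0.5x-1.0x for Cloud/SaaS).
license_booking = 100_000          # one-time license deal (hypothetical)
ps_traditional = license_booking * 2.9

saas_booking = 100_000             # first-year subscription deal (hypothetical)
ps_saas_low, ps_saas_high = saas_booking * 0.5, saas_booking * 1.0

print(f"On-premise: ~{ps_traditional:,.0f} in PS on a {license_booking:,.0f} booking")
print(f"SaaS:       ~{ps_saas_low:,.0f}-{ps_saas_high:,.0f} in PS on the same booking")
```

The services revenue attached to the same booking shrinks by roughly two thirds or more, which is exactly why onboarding and adoption have to move into the product itself.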

Continue reading


Lecture: Operate unreliable IS in a reliable way

I recently started a lecture series at the Vrije Universiteit Amsterdam (VU). As part of this I gave a lecture on how to operate unreliable Information Systems in a reliable way – or: Everything breaks, All the time.

Synopsis:

Behind the clouds of cloud computing! How can we reliably operate systems that are inherently unreliable?

What if for some hours we do not have access to services such as navigation, routing, and other communication technologies? It seems our lives would be at stake if major digital services failed! Many promises of digital technologies, from big data to the Internet of Things and many others, are based on reliable infrastructures such as cloud computing. What if these critical infrastructures fail? Do they ever fail? How do the responsible companies and organizations manage these infrastructures in a reliable way? And what are the implications of all this for companies who want to base their business on such services?

As part of the lecture we explored modern complex systems and how we got there, using examples from Google’s and Amazon’s journeys and how they relate to modern enterprise IT. We used the material of Mark Burgess to explore how to prevent systems from spiralling out of control. We closed off by looking at knowledge management based on the ‘blameless retrospective’ principles and how feedback cycles from other domains are helping to create more reliable IT.

Relevant links supporting the lecture:

The presentation used can be found here: VU lecture
Recording of the session is available within the VU.

VU Assistant Professor Mohammad Mehrizi posted a nice lecture review on LinkedIn, including a picture with some of the attending students.

 


The AWS RI Marketplace – a ghost town?


In 2009 AWS launched their EC2 Reserved Instance (RI) pricing model, providing a significant discount compared to the on-demand pricing model if you are willing to commit to 1 or 3 years of usage.

Cost management on AWS is a hot topic, with hundreds of blogs on making the right choice of RIs. A new market of AWS cost management tools emerged, with tool vendors promising massive ROI. Based on analysis of AWS billing & usage data, these tools provide RI recommendations. AWS’s own Trusted Advisor also provides this kind of analysis, included in their higher-level support plans.
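
To see why the choice matters, here is a back-of-the-envelope break-even calculation; all prices are hypothetical and only illustrate the mechanics these tools automate, namely that a reservation only pays off if the instance actually runs for enough hours of the term.

```python
# Hypothetical prices, not real AWS rates: compare an on-demand instance
# with a 1-year Reserved Instance for the same instance type.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.10    # USD per hour (hypothetical)
ri_upfront = 350.0       # USD, one-time payment for the 1-year term (hypothetical)
ri_hourly = 0.03         # USD per hour, charged for every hour of the term (hypothetical)

# Effective hourly cost of the reservation, spread over the full term.
ri_effective_rate = ri_upfront / HOURS_PER_YEAR + ri_hourly

# The RI is cheaper once the instance runs more than this fraction of the year.
break_even_utilization = ri_effective_rate / on_demand_rate
print(f"RI effective rate: {ri_effective_rate:.3f} USD/h")
print(f"Break-even at    : {break_even_utilization:.0%} utilization")
```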

I highly recommend my SDL FredHopper colleague David Costa’s presentation on the topic, as he basically wrote the (internal) book on how to do this at scale: Cost Optimization at Scale.

You can modify RI reservations after you have made them, for example to:

  • Switch Availability Zones within the same region
  • Change between EC2-VPC and EC2-Classic
  • Change the instance size within the same instance type
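
A minimal sketch of such a modification using boto3, the AWS SDK for Python; the reservation id, region and target Availability Zone below are hypothetical, and in practice you would poll the returned modification id until the change is confirmed.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Hypothetical reservation id; look yours up with describe_reserved_instances().
ri_id = "11111111-2222-3333-4444-555555555555"

# Move the whole reservation to another Availability Zone in the same region.
response = ec2.modify_reserved_instances(
    ReservedInstancesIds=[ri_id],
    TargetConfigurations=[
        {
            "AvailabilityZone": "eu-west-1b",
            "InstanceCount": 4,           # must cover the reserved capacity
            "InstanceType": "m4.large",   # stay within the same instance type footprint
            "Scope": "Availability Zone",
        }
    ],
)
print(response["ReservedInstancesModificationId"])
```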

Still, there are several use cases where you would want to end your reservation before the reservation end date:

  • Switch Instance Types.
  • Buy Reserved Instances on the Marketplace for your medium-term needs.
  • Relocate region.
  • Bad capacity management.
  • Unforeseen business or technology changes.
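
When one of these situations hits, the RI Marketplace is the intended escape hatch: you list the remaining term of your reservation for sale to another AWS customer. Below is a hypothetical boto3 sketch of creating such a listing; the reservation id, counts and price schedule are made up, and AWS requires you to register as a Marketplace seller before the call succeeds.

```python
import uuid
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Hypothetical reservation you no longer need.
ri_id = "11111111-2222-3333-4444-555555555555"

listing = ec2.create_reserved_instances_listing(
    ReservedInstancesId=ri_id,
    InstanceCount=4,                    # how many of the reserved instances to sell
    ClientToken=str(uuid.uuid4()),      # idempotency token
    PriceSchedules=[
        # Upfront price you are asking, stepping down as fewer months remain.
        {"Term": 12, "Price": 800.0, "CurrencyCode": "USD"},
        {"Term": 6,  "Price": 400.0, "CurrencyCode": "USD"},
    ],
)
print(listing["ReservedInstancesListings"][0]["Status"])
```

Whether anyone actually shows up to buy it is, of course, what the ‘ghost town’ question in the title is about.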

Continue reading


DCD EMEA Awards 2016 Finalist!

Less than 24 hours after I published my ‘thank you team’ post, Datacenter Dynamics announced their nominees for this year’s DCD EMEA 2016 Awards.

I’m very proud that one of the projects mentioned in my original post got nominated in the category Cloud Journey of the Year.


The selected project is our move of SDL Machine Translation from our co-lo datacenter to an IaaS cloud solution:

Availability of content in multiple languages is key to driving useful international business. SDL’s statistical machine translation delivers high-quality translation services to thousands of customers. While SDL’s research organisation had already explored a new approach to machine translation, future development and deployment needed more flexibility in technology choice and dynamic scalability to be commercially successful. Over 10 months, SDL migrated their current workload deployment, consisting of hundreds of servers, to a private Cloud deployment without customer downtime. The migration included a project team of more than 35 staff in 5 time zones. Besides flexibility and scalability gains, the migration saves SDL more than 450k GBP over 4 years.

The teams worked long hours, overcoming many obstacles along the way. Congrats to all involved!


Goodbye SDL


After 3 very dynamic years, I’m leaving SDL today. It has been a great journey and I enjoyed every minute of it. Anyone who has followed SDL in the last 9 months has seen a lot of changes announced: divestment of 3 business units, a new CEO, a new CTO,…

While I personally think these changes are good for the company and will bring focus and stability going forward, I also decided I wasn’t going to be part of that future anymore.

With this in mind, I shifted my focus in the last few months to helping find a good home for the business units being divested. It gave me the opportunity to slowly step away from my day-to-day responsibilities without disrupting them too much.

During the hand-over period, you automatically get confronted with what you are going to leave behind. <cue music> Don’t Know What You Got (Till It’s Gone) </cue music> And the saddest thing to leave behind is actually my teams & peers.

Continue reading


Applying firefighter tactics to (IT) leadership

This week I will be celebrating my 15th year of active volunteer firefighter duty. As one naturally tends to do when celebrating milestones like these, I reflected on the past years and what I have learned.
One thing that specifically stood out are the moments in my IT leadership career where I applied firefighting techniques and skills I picked up over the years.
Most of them revolve around problem solving and how to get the most out of teams. While there is an obvious link between firefighting and solving issues in a high-pressure or crisis situation, I learned the same tactics also apply to any other challenge I was confronted with.
When firefighters arrive at the scene of a fire, they always follow the same protocol:
- Assess the situation
- Locate the fire
- Identify & control the flow path
- Extinguish the fire
- Reset & evaluate
In business, and especially at higher leadership levels, some problems may seem very daunting, creating anxiety and leaving you with the feeling of being overwhelmed. Firefighters are used to stepping into highly unknown situations with confidence, and a protocol like the one above helps to gain control of the situation, step by step.

Continue reading


Seven years of Cloud experience in ten Tweets.

With AWS celebrating 10 years since the launch of Amazon S3 in March 2006, and Twitter also celebrating 10 years, I wanted to revisit my ‘cloud rules’ published on Twitter and on my Dutch blog in 2011. The original was written after 2 years of working on an enterprise IT implementation of whatever was perceived as ‘cloud’ in 2009, building a government version of Nebula (the OpenStack predecessor) and starting to utilize AWS & Google Apps in enterprise IT environments.

As the original rules were published on Twitter with its 140-character limit, they lacked some nuance and context, so I converted them into a blog post. Here are the original 7 rules from 2011, with context:

Even though the debate on a definition of ‘what is cloud’ has died down a bit, it does still surface now and then. Given the maturity of the solutions, the current market state and the speed of change in the ‘cloud’ market, I still stick to my opinion from 5+ years ago: a generic definition of cloud is not currently relevant.

The most commonly used definition seems to be the one NIST (pdf) published in 2011, which provides a very broad scope. Looking at IT market developments over the last few years and the potential of what is still to come, we are continually refining these definitions.
As the ‘cloud’ products and services pushed to the market today become common IT practice in a few years, we will slowly see the ‘cloud’ name being dropped as a result.

There is still a valid argument to have a common definition of ‘cloud’ and the delivery models within companies, to avoid miscommunication between IT and the business. The actual content of that internal definition can be whatever you want it to be, as long as there is a common understanding.

The definition debate in the general IT market will continue until the current hype phase has passed. As soon as we enter Gartner’s “Trough of Disillusionment” all marketing departments will want to move away from the ‘cloud’ term, and replace it with whatever the new hype is. We can already see this happening with the emergence of ‘DevOps’, ‘BigData’, ‘Internet of Things (IoT)’.

Just remember there is just one truth when it comes to ‘cloud’: “There is no cloud. It’s just someone else’s computer.”

Continue reading
