Software is hard

Most people who know me know that I spent large parts of my twenties and early thirties in various Arctic and Alpine locations around the world. If it’s cold and remote (and not Russia) there’s a good chance I’ve been there. Life in these areas often involves helicopters, small aircraft, snow machines, power equipment, shotguns, 30-06 rifles, chainsaws, and other noisy things.

Which means that today I am blessed with significantly damaged hearing, along with my other superpower of being able to predict upcoming rain and snow. Last year I finally got my hearing tested and decided to address the issue (thanks to Kim and Steve D for the push). Over-the-counter hearing aids have only recently become available in the US, and after doing all the research I took the plunge, first with Eargo 7s and then with Eargo SEs after I was unable to get a good fit with the former.

Which leads to my point – making in-ear medical devices is hard and I am in awe of the engineering that has gone into these. But devices like this require integration with a tuning app and ongoing fixes and updates (like all software / hardware couplings). Here’s where things get tough – I’m currently on my third set of Eargos, and each time a failed update has bricked the hardware or stopped it working as designed.

I think in many ways it’s a similar situation to the software and OTA update capabilities in modern cars. Renting cars has shown me the many different ways that manufacturers can break and degrade Apple CarPlay, and manufacturers’ own UIs are almost universally worse than Apple’s or Google’s. Special shout-out to the awful Mercedes MBUX, which apparently has the ability to integrate with CarPlay but hides it so efficiently that I have only ever found it by accident.

I am very curious to try out Apple’s new AirPods Pro 2 hearing health enhancements, because I believe that Apple has the ability to manage software and updates effectively – something that medical hardware and vehicle manufacturers could learn from.

What is a “Growth Mindset”?

You’ve probably heard a lot of individuals and organizations referring to “growth mindset” in the past few years, but what do they all mean by this phrase? The Harvard Business Review talked about the concept back in 2016, and it dates back to psychological research done in the 1970s by Carol Dweck, which led to a book (Mindset: The New Psychology of Success) in 2006. The concept migrated from academia, education, and psychology into business and personal development circles and has expanded and morphed since then.

Ceramic image from the Subte (underground / subway) in Buenos Aires, Argentina

I’ve been researching new opportunities at Microsoft recently and was struck by the central place “growth mindset” has in their culture. Which led me to think a little more about what is meant by the concept. In most cases growth mindset is contrasted with a fixed mindset and the need for curiosity, learning, and evolution is stressed. But it seems to me that growth mindset applies differently at different scales.

Organizational – in its simplest (and IMHO incorrect) application, growth mindset for an organization means growing revenue, growing headcount, and growing profit. But we get what we measure – organizations that emphasize growth at all costs may see an increase in revenue but a decrease in margin or profit; an increase in headcount may lead to massive fixed costs and slower execution (reread The Mythical Man-Month for reference); and a focus on growth above all leads to social and environmental impacts.

More subtly, though, a growth mindset within an organization speaks to an openness to learn from history and mistakes – an evolutionary focus that allows the core mission to change over time and adapt to external forces. To facilitate this, there needs to be less focus on rigid hierarchies and processes and more on clear communication across levels and silos in the organization. This is hard to do in regulated and bureaucratic organizations, but no less necessary for that.

Team – at the team level, a growth mindset has the same goals, but overlaps with the individual aspirations of each person. I often think of the apocryphal complaint from a manager – “what if we train people and they leave?” – and the response: “what if we *don’t* train people and they stay?”. Team leaders need to encourage growth and evolution in their teams and model that behavior in their own personal growth. As at the organizational level, this includes evolution of structures, teams, and processes, and the ability to learn from mistakes and trends and adapt to them.

As a leader, I’ve always felt it vital to coach and encourage colleagues to learn, grow, and evolve. And if that leads them to other teams or other careers, that is long-term gain for short-term pain. It’s always better to lose one person to another team or employer and retain many others who see that they are encouraged to grow and progress.

Individual – I think growth mindset has both personal and professional aspects, but I strongly believe it’s hard to separate the two. I strive to end each day having learned something new, challenged myself in some way, and improved some aspect of my life. That might be as small as completing a crossword, reading articles or fiction (a whole other topic), taking a language lesson, or working out.

Time ticks for all of us, but I don’t think we should stay set in our ways and accept that life and capabilities slow down. You can always improve – whether it’s learning another skill at work, improving your skiing or ocean swimming, or simply challenging your brain through reading, learning, and thinking.

As a geologist, I definitely have issues with the concept of Evolve or Die (looking at you, horseshoe crabs, sturgeon, Greenland sharks, and Ginkgo trees), but as a metaphor I like it. Grow. Learn. Improve. Adapt.

What do we mean by “Artificial Intelligence”?

The future is unwritten.

Original Joe Strummer mural on the wall of the Niagara Bar in the East Village, NYC. Memorializes Joe Strummer (1952-2002) and quotes “the future is unwritten” and “know your rights”.

Following on from my last post, and triggered by conversations I have had on the subject since then, it occurred to me that a lot of the confusion around “AI” is that almost everyone has a different understanding of the term. Which, of course, makes serious assessment of the subject difficult.

So, let’s define some parameters:

Broadly speaking, Artificial Intelligence is the ability of machines (computers) to simulate the processes usually associated with cognition or human intelligence. This is where the famous Turing Test comes into play – can a machine/computer respond to questions in such a way that the interrogator is unaware that the other party is not human?

However, a broader definition of AI encompasses the abilities to “learn, read, write, create, and analyze”. I think this is more valuable in terms of scope, because it is closer to the common understanding of what is popularly termed “AI” today. So let’s break those tasks down a little:

  • Learn – machine learning (ML) is a subset of artificial intelligence. All ML is AI, but not all AI is ML (although most AI systems use it). ML is (broadly) statistical analysis on steroids – calculating and weighting patterns and relationships in large data sets. You need good input data, you train the model on part of that data, and then you test and refine against held-out subsets (see the first sketch after this list). ML is great at pattern recognition within data, images, and text, but it is very susceptible to the correlation-equals-causation fallacy.
  • Read – machines don’t “read”; they have data input to them. In this sense, though, the task refers to ingesting large amounts of submitted content and breaking it down into sections, paragraphs, etc., discarding filler words and data noise, calculating relationships, and tracking usage frequencies. This capability is mature – it lies behind the full-text indexing and search that have been around for decades – but it is still far from perfect.
  • Write – again, machines don’t “write”, but they can create somewhat novel assemblages of text (or images) based on statistical rules derived from their input data. The result may be a simulacrum of human writing or it may be word salad (or the visual equivalent). ChatGPT and Claude are large language models (LLMs) that generate text in response to prompts, using statistical models derived from a huge corpus of training data (the second sketch after this list shows a toy version of the idea). This is where a lot of the hype around “AI” has been focused in the past 6-9 months.
  • Create – creation overlaps with “write”. The models can present novel output based on their training data and rule sets but is this “creation”? That’s an epistemological discussion that I’m not qualified to judge, but I would point out that while machines can (and do) find relationships between data that humans have not, they are constrained by their training data and cannot “create” anything that has not been submitted to them as input. They can, and do, create new things from old components but currently there is no way for them to create something wholly original.
  • Analyze – this is the part that machines are really good at, and the area where I believe the greatest strides will be made. Humans have been wonderful at collecting data over the past millennia, but there are limits to any one person’s ability to retain enough of it to draw interdisciplinary conclusions. It has been claimed that Sir Isaac Newton, in the late 1600s and early 1700s, was the last polymath conversant in all aspects of human knowledge, and even then that was probably an exaggeration. Today we generate data far faster than any person or organization can track, and AI will certainly help find relationships between disparate aspects of human knowledge. Of course, this is where hubris creeps in – for instance, will we generate more CO2 from running massive GPU stacks and data stores trying to solve climate change? Will all the assembled data of human knowledge be used to manipulate and sell people things?
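
To make the “learn” and “read” bullets concrete, here’s a minimal sketch in Python using scikit-learn. This is my own toy example – the corpus, the labels, and the library choice are assumptions for illustration, not any particular product – but it shows the workflow described above: break text into tokens, discard English filler words, track word frequencies, then train a model on one subset of the data and test it against a held-out subset.

    # Toy "read" + "learn" sketch (illustrative only; requires scikit-learn).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # A tiny made-up corpus: short "documents" with labels we want learned.
    texts = [
        "the glacier is retreating and the ice is thinning",
        "ice cores preserve layers of volcanic ash",
        "permafrost thaw is releasing methane",
        "quarterly revenue grew and margins improved",
        "the board approved the merger and the stock rose",
        "earnings guidance was revised upward",
    ]
    labels = ["science", "science", "science", "business", "business", "business"]

    # "Read": break the text into tokens, discard common English filler
    # ("stop") words, and track word-usage frequencies per document.
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(texts)

    # "Learn": fit a model on part of the data, then test on the held-out rest.
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.33, random_state=42, stratify=labels
    )
    model = LogisticRegression()
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

And for the “write” bullet, here’s a deliberately crude illustration of generating text from statistical rules – a first-order Markov chain, orders of magnitude simpler than a real LLM, but built on the same basic idea of sampling the next word from frequencies observed in training data:

    # Toy "writer": learn word-transition frequencies from input text, then
    # emit a somewhat novel assemblage of words by sampling from them.
    import random
    from collections import defaultdict

    corpus = (
        "the model reads the data and the model writes new text "
        "and the text follows the statistics of the training data"
    ).split()

    # Count which words follow which (a first-order Markov chain).
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    # Generate: start with a word and repeatedly sample a plausible successor.
    random.seed(0)
    word = "the"
    output = [word]
    for _ in range(12):
        choices = transitions.get(word)
        if not choices:  # dead end: this word never had a successor
            break
        word = random.choice(choices)
        output.append(word)
    print(" ".join(output))

Real LLMs replace this frequency table with billions of learned parameters, but the “constrained by their training data” point above falls out of exactly this structure: the toy model can only ever emit words it has already seen.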

So, to return to the original question – what is AI? It’s a term that encompasses machine learning, large language models, advanced statistics, novel data collection and organization, natural language processing, and many other tools, approaches, and capabilities. I don’t think it’s productive to buy, sell, worry about, or legislate AI without being more precise in your terms.

  • Will “arm-wavy” AI solve all my business or science problems? No, it will not, but machine learning, natural language processing, and analysis of your internal documentation may provide actionable insights.
  • Will AI cure cancer or solve the climate crisis? No, it will not, but the tools that are part of AI may generate novel approaches for research that have been overlooked in the past, which could lead to these breakthroughs.
  • Will AI replace my job? In the short and medium term it is possible that some jobs will be replaced by AI processes, but the care and feeding of those models will also generate new jobs. Of course, as is so often the case, the skill profiles of those replaced and of their replacements will be quite different, so this does merit public discussion.
  • Will AI make Skynet¹ self-aware and lead to the creation of killer robots that can travel back through time to destroy humanity’s last hope? Well, that depends on whether we let Cyberdyne Systems drive our defense allocations – that’s definitely a public policy question.

NOTE: the picture is of the original and best Joe Strummer memorial mural on the wall of the Niagara Bar at 7th and A in the East Village. It was painted by Dr Revolt in 2003, and was unforgivably removed and replaced by a “cleaner” version in 2013 after the bar was renovated. Same artist, different vibe.

The full quote from Joe is “(a)nd so now I’d like to say – people can change anything they want to. And that means everything in the world. People are running about following their little tracks – I am one of them. But we’ve all got to stop just following our own little mouse trail. People can do anything – this is something that I’m beginning to learn. People are out there doing bad things to each other. That’s because they’ve been dehumanised. It’s time to take the humanity back into the center of the ring and follow that for a time. Greed, it ain’t going anywhere. They should have that in a big billboard across Times Square. Without people you’re nothing. That’s my spiel. The future is unwritten.”

  1. Can we talk about the NSA naming a surveillance program after the Terminator antagonist? Is this horribly tone-deaf or is it some kind of inside joke? ↩︎

Are we in the hype phase of AI?

The entire tech industry has embraced the “AI” label in the past few months, but how real are the offerings in the marketplace today, and who will reap the benefits of these AI functions and capabilities in many of the tech tools we all use?

AI, ML, LLMs, and related terms have been emerging in many different areas of tech for the past few years. At Oracle for Research, we funded a lot of AI projects – including the use of AI to triage accident victims based on X-ray images of long bone fractures; the use of ML to interpret three-dimensional posture analysis from smart watch inputs (trained on exercise videos from YouTube); AI-assisted molecular modeling for drug screening; and a project, for which I was proud to be a co-author on a conference presentation, using AI to map agricultural land use in Nigeria from satellite photos. In fact, we sponsored so many AI and ML workloads that I had a weekly meeting with the GPU team to determine where in the world was best to run them to minimize impacts on paying customers.

It’s clear that the impacts of AI and ML on many enterprise systems will be large, and I see Microsoft, Apple, Oracle, Google, and others making enormous investments to add these capabilities to consumer and enterprise products. This afternoon I took a photo of a plant in my garden, and the ML integration with the iPhone camera was able to tell me immediately what the plant was and gave me a set of informational links on how best to care for it.

I’ve been using ChatGPT for help with scripting and coding too – it’s great at suggesting R and Bash snippets based on what I have already done – and I can immediately test whether its suggestions are correct in RStudio. The success rate is not 100%, but it’s pretty good – and more efficient (although probably not as good for my learning) than the countless Google searches I would otherwise have used.

Realistically, though, how is AI going to impact most of the businesses and organizations that I have spent the past 20 years working with around the world? AI and ML might transform how things are done in Palo Alto, Seattle, Austin, and Cambridge, but are they really going to make a big difference for that international steel distributor I worked with – the one that had 35 different ERP systems with no shared data model, data dictionary, or documented processes (and yet was still a billion-dollar company)? Or the truck parts manufacturer in Indiana, with facilities in five countries, that didn’t use cloud resources because they weren’t sure if it was a fad? How about the US Federal department that oversees a substantial part of the nation’s GDP, whose managers vaguely waved their arms about “AI” transforming their (undocumented) processes? How, I asked, were they going to train models when they didn’t actually collect data on processes and performance today?

I don’t mean to be a downer – I think the capabilities of AI and ML can, and will, transform many aspects of our lives – but I do worry that most of the technology’s biggest advocates have no idea how the vast majority of their users (organizations and end-users) actually work day to day. Most companies and organizations in North America, Europe, and APAC haven’t even mastered and deployed search yet. Employees spend substantial parts of their work weeks looking for things that exist – and many of the largest tech firms are in this situation, not just mom-and-pop businesses.

The process of transforming most organizations and enterprises around the world to data-driven practices – which will then provide the data that can be used to train models – has been underway for many years and is far from finished. General-purpose LLMs will be great for fettling the language in press releases, and pattern-matching models will be great for sorting and tagging my photos, but true, transformative change to the way organizations work – based on AI insights tailored to their specific needs and trained on their own data – is much further away.

Why I changed my mind about the cloud

I was very skeptical about cloud deployments for quite a while. I had seen the failed promise of application service providers (ASPs) and virtual desktops in the late 1990s and early 2000s and was very cautious about committing our company’s or our clients’ most sensitive data to “computers that belong to someone else”.

What changed my mind? Primarily security and management. I remember being at an AIIM meeting in NYC (at the Hotel Pennsylvania, across 7th from Penn Station and MSG) where the speaker asked the audience whether they thought their own security people were as good as those that Amazon and Microsoft could attract. Like all good scientists, I knew to re-examine my assumptions and conclusions when faced with new data, and that comment really resonated with me.

I thought about where the vulnerabilities and issues were with self-hosted systems. How their ongoing stability often relied on heroic efforts from overworked and underpaid people. How I had started my tech career at a 2000-era dotcom as the manager of the team desperately trying to scale for growth, manage security, and also fix email and phone issues in the office. I remembered the ops manager at DoubleClick (when they were based in the original Skyrink building in Chelsea) telling me how they treated their commodity servers: reboot after an error, then reimage, then straight to the dumpster if that didn’t fix it – the earliest instance I had come across of treating servers “like cattle, not pets”.

Over time, my thinking changed, and I now think that cloud deployment is the best solution for almost all use cases. We’ve deployed complete cloud solutions for ministry clients in NZ, on private cloud engineered systems and on government cloud virtual servers. TEAM IM moved all of our internal systems to the cloud and gave up our data center six or seven years ago – now everything is Azure, AWS, or Oracle Cloud.

Is it right for everyone? No; here are some examples I’ve encountered where it is not:

  • Insurance client that does 40+ data validations against internal (AS/400) systems with every process
  • National security client managing extremely secure archival data in-house (although that may change in the future)
  • Oil exploration company deploying to remote sites with very limited bandwidth (although we did sync some backend data nightly).

But for most of you? Can you hire better engineers and security staff than Microsoft or Amazon? Can you afford to deploy servers around the world in different data centers? Can you afford to have additional compute and storage capacity sitting in racks ready to go? Do you operate in an environment where connectivity is ubiquitous and (relatively) cheap and fast?

Rethink your assumptions and biases. Change your mind when presented with new data. Make the best decision for your organization or clients. Good luck!

Bluetooth woes

After spending the best part of 4 hours trying to get Bluetooth to work on my Dell Win 7 laptop, I gave up, went to Best Buy, and bought an MS mouse that uses some proprietary protocol.  I have shit to do and I can’t spend hours trying to track down why Bluetooth stopped working at some point in the past week, following the usual 40 or so updates.

I was initially going to complain about Microsoft’s piss-poor implementation of BT, but then I remembered that while my BT mouse works fine on my Mac, the BT audio to my 30-pin iPod dock has become so lossy and unreliable as to be unusable.  So neither Microsoft nor Apple can apparently deliver basic Bluetooth functionality and reliability – which may explain why the technology’s momentum is dying.

The only thing that “just works” is BT pairing between cars and phones and between phones and hands-free headsets, so perhaps that’s where this promising technology is going to remain.

Aether Apparel Highline jacket review

I first saw this jacket and tried it on at the Aether Apparel store on Crosby in SoHo.  That was last spring, and because it was so late in the season they didn’t have the size and colour I wanted.  I’d been hoping there would be a summer sale where I could pick it up at a discount.  Unfortunately that didn’t happen, so I paid full price in late September (and then, of course, there was a Black Friday sale, but I was already using the jacket in Newfoundland when that happened).

Cost is definitely a barrier, as the jacket goes for $550, but I believe it’s decent value given the quality of materials, workmanship, and design.  It’s been my go-to jacket through the late fall and early winter, and because I tend to keep quality gear for a long time, I’m happy.

The key differentiator for this jacket is that it has a much slimmer, more “urban” style than most down or primaloft jackets.  It looks more like the quilted Barbour jackets that everyone wears in Italy in the winter, but has the functionality of most technical down jackets.  The outer material is Schoeller microfiber which is a big plus for me because I’ve been very happy with other items using Schoeller fabrics in the past.

Overall it’s been warm enough for use into the 20s °F (around -6 °C), and I expect it will still be practical 10-15 °F cooler.  Water resistance is good, although it’s not a rain jacket.  My only observation / complaint is that the water-repellent treatment on the lower sleeves has worn after three months’ use.

Fit is great for tall people like me.  The body is long, and I love that it’s longer at the back.  The sleeves are also a perfect length for me – I wear a 35″ sleeve in dress shirts. One of my favorite touches is that the inner lining on the grey jacket is bright red.  I also really like the multiple pockets on the chest and hips; one of the inner chest pockets even has headphone cord routing.  There are many other great small details like this.

You do pay a premium for the style compared to similar jackets from, say, Patagonia, Marmot, or Arc’teryx, but this is definitely a tough, warm, dry technical garment on a par with offerings from those brands.  If style is also something to think about – i.e. you don’t want to look like you just got back from skiing or ice-climbing when you are in the city – then I would certainly recommend this.

(Images above are grabbed from the Aether Apparel web site – all rights are theirs.)

Apple “just works”?

Marco started the conversation with his post questioning whether Apple had lost the plot – an article he now says he wishes he hadn’t published.  I can see why he would rethink the language and tone of the piece, but he does raise an important point: the quality of software execution at Apple has been markedly poorer in the last 1-2 years.

I’ve been an Apple user since 1988 and a shareholder since 2000.  I sold most of the shares I bought at $15 in late 2000 after the stock split and hit $100 in 2007; it covered most of the down payment on my apartment (perhaps a poor choice in retrospect, but I needed a place to live).  Historically, Apple didn’t release major OS updates very frequently, but the release cadence has accelerated since Lion in 2011.  It’s clear to anyone who pays attention that software quality has been problematic since then and is getting worse.

  • iTunes has major issues that haven’t been addressed for years
  • Yosemite had major functional problems in the initial release, and many serious OS X users have still not upgraded because of this (including me)
  • Apple Mail is outdated, inflexible, and barely functional
  • User security for iCloud is terrible and risks damaging Apple’s reputation altogether.

I could go on.

Five years ago I would have recommended OS X and iOS to friends and relatives because things were simpler and easier to use.  The hardware is higher quality and the integration between devices is still better than the other options, but this is mainly because the other options are so terrible. Microsoft lost the plot with Windows 8, and I almost never see it in the wild. Desktop Linux is still reserved for enthusiasts and is not an option for most users. I spend too much time in the work day wrestling with Linux and Solaris servers; I don’t need that in a desktop platform.

Apple is still my OS of choice, but they really need to improve their software development and release process.  This probably means slowing major releases to 18- or 24-month intervals, but who would complain about that?

Update and clarification on my WinMo phone posting

Here

That last blog posting generated the most traffic of anything I have ever posted here.  I certainly did not intend to enter the religious wars of the Middle Ages.

People certainly are invested in their choice of mobile OS.  I didn’t think of that, because I don’t feel strongly either way.  I like my iPad and iPhone, but I don’t think it would be a great hardship to switch platforms (apart from the learning curve and repurchasing costs). I certainly don’t identify myself by those choices.

Happy New Year!