
Are we in the hype phase of AI?

The entire tech industry has embraced the “AI” label in the past few months, but how real are the offerings on the market today, and who will actually reap the benefits of the AI functions and capabilities being added to the tech tools we all use?

AI, ML, LLM and related terms have been emerging in many different areas of tech for the past few years. At Oracle for Research, we funded a lot of AI projects – including the use of AI to triage accident victims based on X-ray images of long bone fractures; the use of ML to interpret three-dimensional posture from smartwatch inputs (trained on exercise videos on YouTube); AI-assisted molecular modeling for drug screening; and a project using AI to map agricultural land use in Nigeria from satellite photos, for which I was proud to co-author a conference presentation. In fact, we sponsored so many AI and ML workloads that I had a weekly meeting with the GPU team to determine where in the world was best to run them to minimize the impact on paying customers.

It’s clear that the impact of AI and ML on many enterprise systems will be large, and I see Microsoft, Apple, Oracle, Google, and others making enormous investments to add these capabilities to consumer and enterprise products. This afternoon I was able to take a photo of a plant in my garden, and the ML integration with the iPhone camera told me immediately what the plant was and gave me a set of informational links on how best to care for it.

I’ve been using ChatGPT for help with scripting and coding too – it’s great at suggesting R and Bash snippets based on what I have already written – and I can immediately test whether it’s correct in RStudio. The success rate is not 100%, but it’s pretty good – and more efficient (although probably not as good for my learning) than the countless Google searches for suggestions I would otherwise have used.

Realistically, though, how is AI going to impact most of the businesses and organizations I have spent the past 20 years working with around the world? AI and ML might transform how things are done in Palo Alto, Seattle, Austin, and Cambridge, but are they really going to make a big difference for that international steel distributor I worked with? The one with 35 different ERP systems and no shared data model, data dictionary, or documented processes (and yet still a billion-dollar company). Or the truck parts manufacturer in Indiana, with facilities in five countries, that didn’t use cloud resources because they weren’t sure whether it was a fad? How about the US federal department that oversees a substantial part of the nation’s GDP, where managers vaguely waved their arms about “AI” transforming their undocumented processes? How, I asked, were they going to train models when they didn’t actually collect data on processes and performance today?

I don’t mean to be a downer, and I think the capabilities of AI and ML can, and will, transform many aspects of our lives. But I do worry that most of the technology’s biggest advocates have no idea how the vast majority of their users (organizations and end-users) actually work day to day. Most companies and organizations in North America, Europe, and APAC haven’t even mastered and deployed search yet. Employees spend substantial parts of their work weeks looking for things that already exist – and many of the largest tech firms are in this situation, not just mom-and-pop businesses.

The process of transforming most organizations and enterprises around the world to data-driven practices – which will then provide the data that can be used to train models – is still underway and has been for many years. General-purpose LLMs will be great for fettling the language in press releases, and pattern-matching models will be great for sorting and tagging my photos, but true, transformative change to the way organizations work, based on AI insights tailored to their specific needs and trained on their own data, is much further away.

Why I changed my mind about the cloud

I was very skeptical about cloud deployments for quite a while. I had seen the failed promise of application service providers (ASPs) and virtual desktops in the late 1990s and early 2000s and was very cautious about committing our company’s or our clients’ most sensitive data to “computers that belong to someone else”.

What changed my mind? I think it was primarily security and management. I remember being at an AIIM meeting in NYC (at the Hotel Pennsylvania, across 7th Avenue from Penn Station and MSG) where the speaker asked whether people thought their own security staff were as good as those Amazon and Microsoft could attract. Like all good scientists, I knew to re-examine my assumptions and conclusions when faced with new data, and that comment really resonated with me.

I thought about where the vulnerabilities and issues were with self-hosted systems. How their ongoing stability often relied on heroic efforts from overworked and underpaid people. How I had started my tech career at a 2000-era dotcom as the manager of the team desperately trying to scale for growth, manage security, and also fix email and phone issues in the office. I remembered the ops manager at DoubleClick (when they were based in the original Skyrink building in Chelsea) telling me how they handled their commodity servers: reboot after an error, then a reimage, then straight to the dumpster if that didn’t fix it – the earliest instance I had come across of treating servers “like cattle, not pets”.

Over time, my thinking changed and I now think that cloud server deployment is the best solution for almost all use cases. We’ve deployed complete cloud solutions for ministry clients in NZ on private cloud engineered systems and on government cloud virtual servers. TEAM IM moved all of our internal systems to the cloud and gave up our data center 6 or 7 years ago – now everything is Azure, AWS, or Oracle Cloud.

Is it right for everyone? No; here are some examples I’ve encountered where it is not:

  • An insurance client that runs 40+ data validations against internal (AS/400) systems with every process
  • A national security client managing extremely sensitive archival data in house (although that may change in the future)
  • An oil exploration company deploying to remote sites with very limited bandwidth (although we did sync some data to the back end nightly)

But for most of you? Can you hire better engineers and security staff than Microsoft or Amazon? Can you afford to deploy servers around the world in different data centers? Can you afford to have additional compute and storage capacity sitting in racks ready to go? Do you operate in an environment where connectivity is ubiquitous and (relatively) cheap and fast?

Rethink your assumptions and biases. Change your mind when presented with new data. Make the best decision for your organization or clients. Good luck!

Alfresco integration with Salesforce

Back to meat and potatoes – or their vegetarian equivalent in my case.

We are working with a client to deploy Alfresco One as a content and records management platform for their business. An important requirement is that we be able to integrate with Salesforce, as that is where their contracts are currently stored as attachments and where their workflow lives. During the scoping process we knew that Alfresco had created a Salesforce integration app available on AppExchange.

However, there are some limitations and “gotchas” that are good to know about when designing a solution around this integration.

  1. The integration is only supported for the my.alfresco hybrid cloud. This is driven by Salesforce’s security requirements. If you have an on-prem or hosted Alfresco installation, you will need to synchronize with the cloud extranet.
  2. The integration is really designed to be initiated from the Alfresco end rather than (as in our case) pushing attachments from Salesforce into Alfresco. The developers at Alfresco have been very helpful in giving us guidance on how to work with this, but understanding this “normal flow” would have helped us earlier in the process. Learn from my mistake!
  3. All the content from Salesforce is put into a single “Attachments” folder in a single site. However, if the SF record has an account record as its parent, that account becomes the root of the structure and each object becomes a child of that folder. For example: Attachments -> ClientA -> OpportunityZ, or Attachments -> ClientB -> CaseY (the sketch after this list shows one way to inspect the resulting structure).
  4. You can use Alfresco rules to move content around if that makes more sense for your existing organization, because synced nodes are tracked no matter where the files are moved.
  5. All the content in the SF site will have common security, so you will have to assign permissions to the content yourself. Again, the integration is built from the PoV that content originates in Alfresco, syncs to the cloud, and from there to SF. If you are reversing that flow, things become WAY more complex.
  6. The current release of the Alfresco integration app only supports a default set of metadata for Accounts, Opportunities, Contracts, and Cases – these need to be mapped to Alfresco properties. However, we hear that there may be support for custom metadata in the next release.
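
To sanity-check what the sync has created (points 3 and 4), you can browse the site’s document library over CMIS. Here is a minimal sketch using the Apache Chemistry cmislib client; the endpoint URL, the site name (“salesforce”), and the credentials are assumptions for illustration and will differ in your installation.

    # Minimal sketch: walk the synced "Attachments" folder structure over CMIS.
    # The endpoint path, site name, and credentials below are placeholders.
    from cmislib import CmisClient

    client = CmisClient(
        'https://alfresco.example.com/alfresco/api/-default-/public/cmis/versions/1.1/atom',
        'admin', 'admin')
    repo = client.defaultRepository

    # Account folders sit under the single "Attachments" folder in the SF site.
    attachments = repo.getObjectByPath('/Sites/salesforce/documentLibrary/Attachments')
    for account in attachments.getChildren():      # e.g. ClientA, ClientB
        print(account.getName())
        for child in account.getChildren():        # e.g. OpportunityZ, CaseY
            print('  ' + child.getName())

If you do use rules to refile content into your own structure (point 4), the sync link follows the node wherever it moves, so a walk like this is mostly useful for verifying what the integration created before you start reorganizing.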

Overall the integration is great if you are following the use case it was designed to address.  The documentation is good, installation is easy, and the developers have been helpful and responsive to questions. But we may need to look at other ways to extract the existing content and populate our Alfresco repository.  I’m currently looking at Data Loader as a tool to extract existing objects for import into the Alfresco instance.
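
If Data Loader turns out to be awkward for attachments, scripting the extraction against the Salesforce REST API is another option. The sketch below is illustrative only – the instance URL, API version, and access token are assumptions – and it simply saves each Attachment to disk, ready for a later bulk import into Alfresco.

    # Illustrative sketch: pull Attachment records out of Salesforce via the
    # REST API and save them locally for a later bulk import into Alfresco.
    # The instance URL, API version, and access token are placeholders.
    import os
    import requests

    INSTANCE = 'https://yourinstance.salesforce.com'
    HEADERS = {'Authorization': 'Bearer ' + os.environ['SF_ACCESS_TOKEN']}

    # In SOQL query results the Body field of an Attachment is returned as a
    # URL to the binary content, not the content itself.
    soql = "SELECT Id, Name, ParentId, Body FROM Attachment"
    resp = requests.get(INSTANCE + '/services/data/v39.0/query/',
                        headers=HEADERS, params={'q': soql})
    resp.raise_for_status()

    # Pagination via nextRecordsUrl is omitted for brevity.
    for rec in resp.json()['records']:
        blob = requests.get(INSTANCE + rec['Body'], headers=HEADERS)
        blob.raise_for_status()
        with open(rec['Id'] + '_' + rec['Name'], 'wb') as out:
            out.write(blob.content)

Keeping the ParentId alongside each file makes it easier to rebuild the account/object folder structure, or map values to Alfresco properties, on the way in.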

(Thanks to Jon Chartrand, Jared Ottley, and Greg Melahn for their help in gaining this insight – all mistakes are mine)

WebCenter on Exalogic and Exadata

There’s currently a lot of interest in moving virtualized environments to Oracle’s engineered systems.  This is partly because they are good systems and, for organizations that can use their capabilities, provide good value for money and high performance. Partly because Oracle licensing makes it tough to virtualize cost-effectively on other platforms (looking at you, VMware). And partly because Oracle sales people are extremely motivated to sell hardware along with software.

Unfortunately, though, there is still a lot of confusion about how this might impact deployment of WebCenter on these engineered systems.  Here are a few scenarios you may come across and how to deal with them.

  • Exadata (or Database Appliance) – no impact at all from an installation point of view. The database is still just a database from the application’s point of view and will continue to connect via JDBC (see the WLST sketch after this list).
  • Exalogic with native OEL – this is a rare configuration, but Exalogic does support installing OEL natively on compute nodes. In that case there is no difference from installing on any other Linux OS. Assume (and ensure) that networking is handled by the Exalogic administrator, because that is where the issues may arise.
  • Exalogic with virtualized compute nodes – the most common deployment. The standard/supported approach is to install all the WebCenter components on virtual OEL servers as usual. Installation of WebLogic and WebCenter on Exalogic Elastic Cloud is exactly the same as on a regular server. Networking can be challenging when configuring virtual environments on Exalogic, so be sure that is all worked out ahead of time. Domain configuration and data stores should be on the ZFS storage appliance.
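
On the database side, “just a database” really does mean the usual thin-driver plumbing. As a rough sketch, retargeting an existing WebCenter data source at the Exadata RAC service from WLST might look like the following; the data source name, credentials, SCAN address, and service name are all placeholders, so check them against your own domain.

    # WLST sketch (run with wlst.sh): point an existing data source at the
    # Exadata database via its SCAN listener. Names, hosts, and credentials
    # are placeholders.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()

    ds = 'WebCenterDS'   # hypothetical data source name
    cd('/JDBCSystemResources/' + ds + '/JDBCResource/' + ds + '/JDBCDriverParams/' + ds)

    # From the application's point of view this is an ordinary thin-driver URL;
    # the SCAN address and service name come from the Exadata DBA.
    set('Url', 'jdbc:oracle:thin:@//exa-scan.example.com:1521/wcpdb.example.com')

    save()
    activate()
    disconnect()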

A major value add for Exalogic is the optimization for WebLogic that is designed into the system. None of these optimizations is enabled out of the box, though; they all have to be configured on a domain or server basis. This is a good resource for working through the optimizations.
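
As an example of the “not out of the box” point, the domain-wide flag can be flipped from WLST roughly as follows. The attribute name is as I understand it from the WebLogic documentation and the connection details are placeholders, so verify against your version; the individual server- and data-source-level tuning settings are separate and still need to be worked through.

    # WLST sketch: enable the domain-level Exalogic optimizations flag.
    # Connection details are placeholders; managed servers need a restart
    # for the change to take effect.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()

    cd('/')
    cmo.setExalogicOptimizationsEnabled(true)   # WLST defines true/false

    save()
    activate()
    disconnect()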