💡 My belief in the importance of RAG (retrieval-augmented generation) for the future grew stronger after IBM showed some impressive RAG applications. One of the presenters even called it the “biggest productivity winner” among their AI tools.

🔍 Two particular examples:

  • IBM Deep Search (https://ds4sd.github.io/) is super powerful, and there is even a free version.
  • Sevilla FC Scout Advisor: here RAG is applied to athlete scouting. Check out https://lnkd.in/eJ3mz9uS.
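
Since RAG is the thread running through all of this, here is a minimal sketch of the retrieve-then-generate pattern, just to make the idea concrete. It is my own illustration, not code from Deep Search or Scout Advisor: the document list, the keyword-overlap scoring and the RagSketch class are all invented, and the actual model call is left as a placeholder.

```java
import java.util.*;
import java.util.stream.*;

// Minimal sketch of the RAG pattern: retrieve the most relevant snippets,
// then ground the prompt in them before calling a language model.
// Everything here is illustrative; the model call itself is a placeholder.
public class RagSketch {

    // Toy "knowledge base": a real system would use a vector store or a
    // document search index instead of an in-memory list.
    static final List<String> DOCUMENTS = List.of(
            "COBOL still powers a large share of payment processing on mainframes.",
            "Granite is IBM's family of foundation models.",
            "Retrieval augmented generation grounds model answers in retrieved documents.");

    // Crude keyword-overlap relevance score; real systems use embeddings.
    static long score(String query, String doc) {
        Set<String> queryWords = new HashSet<>(Arrays.asList(query.toLowerCase().split("\\W+")));
        return Arrays.stream(doc.toLowerCase().split("\\W+"))
                .filter(queryWords::contains)
                .count();
    }

    static String buildPrompt(String question) {
        // Retrieve: keep the two most relevant snippets.
        List<String> context = DOCUMENTS.stream()
                .sorted(Comparator.comparingLong((String d) -> score(question, d)).reversed())
                .limit(2)
                .collect(Collectors.toList());

        // Generate: ground the model's answer in the retrieved context.
        return "Answer using only the context below.\n"
                + "Context:\n- " + String.join("\n- ", context) + "\n"
                + "Question: " + question;
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("What does retrieval augmented generation do?"));
        // A real pipeline would now send this prompt to a model endpoint
        // (for example watsonx.ai); that call is deliberately omitted here.
    }
}
```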

📊 Some interesting facts and figures:

  • IBM claims to have saved 1,500 hours of manual work for one customer by using generative AI to turn natural language into code.
  • They estimate that, over time, 70% of code writing will be done by AI.
  • At IBM, 60% of dev content is automatically generated.
  • If coding is assisted by AI, there is a 45% reduction in development effort.
  • You can bring your own models (BYOM) into the Watson ecosystem.
  • IBM claims to be “open” to other models, though not as open as true open source (if I understood correctly).
  • 90% of online payments still run through COBOL on mainframe.
  • There are still about 230 billion lines of COBOL code active in production.

One cannot forget the importance of AI governance, which consists of three pillars: lifecycle, risk management and regulatory compliance.

A clear distinction was also made between the types of risks in an AI project: regulatory risk, reputational risk and operational risk.

⏱️ I also learned that IBM has its own foundation model (granite-20b-multilingual), built on a 6 TB “blue stack” of IBM-validated content. Unfortunately, Dutch is not yet supported; only English, German, Spanish, French and Portuguese are.

💫 IBM has coding assistants for Ansible, Cobol2Java and Java, underlining their focus on the continued importance of the mainframe. They reiterate that 9 out of 10 payments are eventually processed by COBOL/mainframe systems. And I learned about Jobol, a COBOL-to-Java translation that is more than just a simple line-by-line conversion.
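
To make that last point concrete, here is my own toy illustration of the difference between a literal, line-by-line translation and an idiomatic rewrite of the same trivial COBOL loop. Neither version comes from Cobol2Java or any other IBM tool; the COBOL fragment and both Java methods are invented for this post.

```java
// Toy illustration: the same trivial COBOL-style loop translated two ways.
//
// Hypothetical COBOL paragraph:
//   PERFORM VARYING IDX FROM 1 BY 1 UNTIL IDX > 10
//       ADD IDX TO WS-TOTAL
//   END-PERFORM
public class TranslationStyles {

    // 1) Literal translation: compiles and runs, but still "thinks" in COBOL
    //    (working-storage style names, loop mechanics copied verbatim).
    static int literalTranslation() {
        int wsTotal = 0;
        int idx = 1;
        while (!(idx > 10)) {
            wsTotal = wsTotal + idx;
            idx = idx + 1;
        }
        return wsTotal;
    }

    // 2) Idiomatic rewrite: expresses the intent (sum 1..10) in plain Java.
    static int idiomaticTranslation() {
        return java.util.stream.IntStream.rangeClosed(1, 10).sum();
    }

    public static void main(String[] args) {
        // Both return 55; the difference is readability and maintainability.
        System.out.println(literalTranslation() + " == " + idiomaticTranslation());
    }
}
```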

Want to know more?