It has been more than 7 years since I wrote a blog post titled Information for Technologists Interested in Learning about Artificial Intelligence.
Throughout my career at the intersection of product engineering and news media, I’ve been captivated by the potential of technology to foster innovation. I’ve had the privilege to work with pioneering technologies, and a particular favorite of mine has always been artificial intelligence (AI), specifically Artificial Neural Networks and Deep Learning.
Looking back, I realize that AI technologies, especially Natural Language Processing (NLP), have been part of my work in various media companies since the 1990s, even before their mainstream adoption. This early exposure to AI’s immense potential laid the groundwork for my subsequent endeavors in the field.
In those formative years, my colleagues and I at Knight Ridder were working with multiple AI and machine learning applications. We employed tools such as NetOwl Extractor to parse newspaper classified ads for jobs, cars, real estate, rentals, and other categories, extracting structured data to populate searchable relational databases. We also trained software like Autonomy to categorize newspaper articles, integrating machine learning in a practical and productive way. Various AI techniques also found their place in personalization applications. My colleagues Shannon Brown, Jason Miller, Wayne Weber, and Bob Hucker were instrumental in these early explorations of AI and machine learning. My colleague Nagraj recently reminded me of a key difference from the era before LLMs: we had to train those machine learning systems ourselves for each task, so their accuracy was lower than that of modern LLM-based approaches.
My journey continued at the Wall Street Journal, where I had the privilege of collaborating with Francesco Marconi, an AI expert and founding CEO of Applied XL. Together with Till Daldrup, we co-authored an article on AI algorithms and journalism titled “Acing the Algorithmic Beat, Journalism’s Next Frontier”. Our collaborative work also highlighted the importance of equipping journalists with AI tools to detect deepfakes, as detailed in the article “How The Wall Street Journal is preparing its journalists to detect deepfakes”. These experiences underlined the importance of AI in contemporary journalism, a theme I also addressed in a talk titled “Fighting Fake News with AI and Crowdsourcing” at the 2018 Applause summit.
As the CTO at The New York Times, my team and I harnessed AI algorithms to develop the paper’s personalization features. My colleagues Boris Chen and Daeil Kim did innovative work using AI to deliver business value on both the Times’ digital side (the personalization engine) and its print side (physical newspaper delivery optimization). I gained valuable knowledge and insights from them. These endeavors further reaffirmed my belief in AI’s transformative potential in the media landscape.
I now work at Hearst, where my teammates and I are building a number of AI applications to improve existing products and create new ones. We are also developing AI tools to boost productivity in our own work.
The focus is shifting towards Generative AI and Large Language Models. Many of my friends and colleagues have approached me for simple explanations of these concepts. Hence, this blog post, along with a couple of informative videos, aims to demystify these fascinating technologies.
What is Generative AI?
Generative AI is a subset of artificial intelligence that focuses on creating new data samples based on the patterns and structures it has learned from existing data. In other words, it’s an AI approach that can generate new and original content, such as text, images, music, or even code.
Think of it this way: imagine having a complex report to write, and instead of starting from a blank document, you have an AI assistant generate a draft based on your requirements and past reports. You then edit that draft into what you want. This is just one example of how Generative AI can improve productivity and support creative work.
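The core idea — learn patterns from existing data, then sample new content from those patterns — can be illustrated with a deliberately tiny sketch. The Markov-chain generator below is nothing like the deep neural networks behind modern Generative AI; it is just a toy that shows the learn-then-generate loop in a few lines of Python:

```python
import random

def train(text, order=2):
    """Learn which character tends to follow each `order`-length context."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model

def generate(model, seed, length=40):
    """Sample new text one character at a time from the learned patterns."""
    out = seed
    for _ in range(length):
        context = out[-len(seed):]
        choices = model.get(context)
        if not choices:  # unseen context: stop generating
            break
        out += random.choice(choices)
    return out

corpus = "the news that the new media uses the new tools"
model = train(corpus, order=2)
print(generate(model, "th"))
```

Real generative models replace the lookup table with billions of learned parameters, but the shape of the process — statistics in, novel samples out — is the same.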
Large Language Models
Large Language Models (LLMs) are a type of generative AI that has gained significant attention in recent years. These models excel at understanding and generating human-like text based on the vast amounts of textual data they’ve been trained on. An LLM can generate coherent paragraphs, summarize lengthy articles, translate languages, and even answer questions based on context.
Some well-known LLMs include OpenAI’s GPT-4, Google’s PaLM, and Anthropic’s Claude. These models have demonstrated impressive capabilities and have been widely adopted in various applications, from content generation to customer service chatbots.
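Under the hood, an LLM does something conceptually simple at enormous scale: given the text so far, it predicts a probability distribution over the next token, then repeats. The word-level sketch below is only a framing device — real LLMs use transformer networks, not raw counts — but it shows what "predicting the next token" means:

```python
from collections import Counter, defaultdict

def next_token_probs(corpus, context_word):
    """Estimate P(next word | previous word) from raw bigram counts."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    total = sum(counts[context_word].values())
    return {w: c / total for w, c in counts[context_word].items()}

corpus = "the model reads the text and the model writes text"
print(next_token_probs(corpus, "the"))  # roughly {'model': 0.67, 'text': 0.33}
```

An LLM conditions on thousands of preceding tokens rather than one word, which is what lets it produce coherent paragraphs instead of word salad.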
Having had the privilege of working with AI for decades, I am continually amazed by the advancements in this field. I am convinced that Generative AI and Large Language Models will play an increasingly vital role in the future of business and technology.
Video Introductions to Artificial Intelligence, Generative AI, and Large Language Models
To give you a concise and accessible introduction to these concepts, I recommend watching the following short videos.
These videos are suitable for CEOs, CFOs, general managers, strategy leaders, product managers, and others who want to grasp the fundamental concepts and potential applications of Generative AI and LLMs without delving deep into technical details.
The Future is AI
We’re living in an era where AI is transforming the way we live and work, and understanding these technologies is essential for business leaders. Generative AI and Large Language Models have immense potential, and as they continue to evolve, I’m excited to see how they’ll shape the future.
As someone who has been on the front lines of AI integration in the media industry for decades, I can attest to the transformative power of these technologies.
I hope this blog post and the accompanying videos help you better understand these concepts. Please feel free to reach out to me if you have any questions or if you’d like to discuss further.
In the spirit of continuous learning and collaboration, in my personal time I have been working on Ragbot.AI, a personalized open-source AI assistant and chatbot, as part of my AI-augmented brain project. This tool aims to leverage advanced AI technologies to provide a highly personalized digital assistant. Feel free to explore the project on the Ragbot.AI GitHub repository and in my blog post about it.