
Retrieval Augmented Generation using Cohere Command model through Amazon Bedrock and domain data in Elasticsearch

Domain Specific Generative AI: Pre-Training, Fine-Tuning, and RAG
There are a number of strategies for adding domain-specific knowledge to large language models (LLMs), such as pre-training, fine-tuning, and Retrieval Augmented Generation (RAG).

Vector Similarity Computations FMA-style
Use of FMA within vector similarity computations in Lucene

Vector Search (kNN) Implementation Guide - API Edition
Follow along with code examples and a Jupyter notebook to quickly get up and running with kNN vector search in Elasticsearch

Chunking Large Documents via Ingest pipelines plus nested vectors equals easy passage search
In this post we'll show how to easily ingest large documents and break them up into sentences via an ingest pipeline, so that they can be text embedded with nested vector support for searching large documents semantically.

Retrieval Augmented Generation (RAG)
Learn what Retrieval Augmented Generation (RAG) is and how the technique can improve the quality of an LLM's generated responses by providing relevant source knowledge as context.

Introducing Scalar Quantization in Lucene
How we introduced scalar quantization into Lucene

Finding your puppy with Image Search
Have you ever found a lost puppy on the street and not known whether it had an owner? Learn how to find out with vector search and image search.

Using hybrid search for gopher hunting with Elasticsearch and Go
Just like animals and programming languages, search has undergone an evolution of different practices that can be difficult to pick between. In the final blog of this series, Carly Richmond and Laurent Saint-Félix combine keyword and vector search to hunt for gophers in Elasticsearch using the Go client.

Finding gophers with vector search in Elasticsearch and Go
Just like animals and programming languages, search has undergone an evolution of different practices that can be difficult to pick between. Join us on part two of our journey hunting gophers in Go with vector search in Elasticsearch.

Elasticsearch as a GenAI Caching Layer
Explore how integrating Elasticsearch as a caching layer optimizes Generative AI performance by reducing token costs and response times, demonstrated through real-world testing and practical implementations.

Go-ing gopher hunting with Elasticsearch and Go
Just like animals and programming languages, search has undergone an evolution of different practices that can be difficult to pick between. Join us as we use Go to hunt for gophers in Elasticsearch using traditional keyword search.

Scalar quantization 101
What is scalar quantization and how does it work?

Use Amazon Bedrock with Elasticsearch and Langchain
Learn to split fictional workplace documents into passages, transform those passages into embeddings in Elasticsearch, and integrate an Amazon Bedrock LLM.

Improving information retrieval in the Elastic Stack: Improved inference performance with ELSER v2
Learn about the improvements we've made to the inference performance of ELSER v2.

Improving information retrieval in the Elastic Stack: Optimizing retrieval with ELSER v2
Learn about how we're reducing retrieval costs for ELSER v2.

Less merging and faster ingestion in Elasticsearch 8.11
Elasticsearch 8.11 improves how it manages its indexing buffer, resulting in less segment merging.

How to create customized connectors for Elasticsearch
Learn how to create customized connectors for Elasticsearch to simplify your data ingestion process.

Lexical and Semantic Search with Elasticsearch
In this blog post, you will explore various approaches to retrieving information using Elasticsearch, focusing specifically on text: lexical and semantic search.

Generative AI architectures with transformers explained from the ground up
This long-form article explains how generative AI works, from the ground all the way up to generative transformer architectures with a focus on intuitions.

Multilingual vector search with the E5 embedding model
In this post we'll introduce multilingual vector search. We'll use the Microsoft E5 multilingual embedding model, which has state-of-the-art performance in zero-shot and multilingual settings. We'll walk through how multilingual embeddings work in general and then how to use E5 in Elasticsearch.

Bringing Maximum-Inner-Product into Lucene
How we brought maximum-inner-product into Lucene

Adding passage vector search to Lucene
Discover how we added passage vectors to Apache Lucene, the benefits of doing so, and how existing Lucene structures were used to create an efficient retrieval experience.

Demystifying ChatGPT: Different methods for building AI search
In this blog, we look at how ChatGPT works and consider three approaches to building generative AI search experiences for specific domains.

Retrieval vs. poison — Fighting AI supply chain attacks
In this post, learn about the supply chain vulnerabilities of artificial intelligence large language models and how the AI retrieval techniques of search engines can be used to fight misinformation and intentional tampering of AI.

Generative AI using Elastic and Amazon SageMaker JumpStart
Learn how to build a generative AI solution by exploring Amazon SageMaker JumpStart, Elastic, and Hugging Face open source LLMs, using the sample implementation provided in this post and a data set relevant to your business.

Vector search in Elasticsearch: The rationale behind the design
There are different ways to implement a vector database, each with different trade-offs. In this blog, you'll learn more about how vector search has been integrated into Elasticsearch and the trade-offs that we made.

Relativity uses Elasticsearch and Azure OpenAI to build futuristic search experiences, today
Elasticsearch Relevance Engine is a set of tools for developers to build AI-powered search applications. Relativity, the eDiscovery and legal search tech company, is building next-generation search experiences with Elastic and Microsoft Azure OpenAI.

How to get the best of lexical and AI-powered search with Elastic’s vector database
Elastic has all you should expect from a vector database — and much more! You get the best of both worlds: traditional lexical and AI-powered search, including semantic search out of the box with Elastic’s novel Learned Sparse Encoder model.
Open-sourcing sysgrok — An AI assistant for analyzing, understanding, and optimizing systems
Sysgrok is an experimental proof-of-concept, intended to demonstrate how LLMs can be used to help SWEs and SREs understand systems, debug issues, and optimize performance.

The generative AI societal shift
Learn how Elastic is at the forefront of the Large Language Models revolution, helping users take LLMs to new heights by providing real-time information and integrating LLMs into search, observability, and security systems for data analysis.
Logs: Understanding TLS errors with ESRE and generative AI
This blog presents a novel application of the Elasticsearch Relevance Engine (ESRE) with its Elastic Learned Sparse Encoder capability, specifically in log analysis.

ChatGPT and Elasticsearch: APM instrumentation, performance, and cost analysis
In this blog, we'll instrument a Python application that uses OpenAI and analyze its performance, as well as the cost to run the application. Using the data gathered from the application, we will also show how to integrate LLMs into your application.

ChatGPT and Elasticsearch: Faceting, filtering, and more context
By providing tools like ChatGPT additional context, you can increase the likelihood of obtaining more accurate results. See how Elasticsearch's faceting and filtering framework can allow users to refine their search and reduce costs.

ChatGPT and Elasticsearch: OpenAI meets private data
Explore the integration of Elasticsearch's search relevance capability with ChatGPT's question-answering capability to enhance your domain-specific knowledge base. Learn how to harness ChatGPT to enrich your information repository like never before!

ChatGPT and Elasticsearch: A plugin to use ChatGPT with your Elastic data
Learn how to implement a plugin and enable ChatGPT users to extend ChatGPT with any content indexed in Elasticsearch, using the Elastic documentation.

Enhancing chatbot capabilities with NLP and vector search in Elasticsearch
In this blog post, we will explore how vector search and NLP work to enhance chatbot capabilities and demonstrate how Elasticsearch facilitates the process. Let's begin with a brief overview of vector search.

Unlocking the potential of large language models: Elastic's first code contribution to LangChain
In this blog, we explore the exciting synergy between Langchain and Elasticsearch, two powerful tools transforming the landscape of large language models. We provide an overview of the collaboration and its potential to shape application development.
Introducing Elasticsearch Relevance Engine™ — Advanced search for the AI revolution
Elasticsearch Relevance Engine™ (ESRE) powers generative AI solutions for private data sets with a vector database and machine learning models for semantic search that bring increased relevance to more search application developers.

Improving information retrieval in the Elastic Stack: Introducing Elastic Learned Sparse Encoder, our new retrieval model
Deep learning has transformed how people retrieve information. We've created a retrieval model that works with a variety of text with streamlined processes to deploy it. Learn about the model's performance, its architecture, and how it was trained.

Accessing machine learning models in Elastic
Bring your own transformer models into Elastic to use optimized embedding models and NLP, or integrate with third-party transformer models such as OpenAI GPT-4 via APIs to leverage more accurate, business-specific content based on private data stores.

Introducing Elastic Learned Sparse Encoder: Elastic’s AI model for semantic search
Elastic Learned Sparse Encoder is an AI model for high relevance semantic search across domains. As a sparse vector model, it expands the query with terms that don't exist in the query itself, delivering superior relevance without domain adaptation.

Monitor OpenAI API and GPT models with OpenTelemetry and Elastic
Get ready to be blown away by this game-changing approach to monitoring cutting-edge ChatGPT applications! As the ChatGPT phenomenon takes the world by storm, it's time to supercharge your monitoring game with OpenTelemetry and Elastic Observability.

Privacy-first AI search using LangChain and Elasticsearch
The world of search is changing very quickly. ChatGPT has cemented generative AI's place in making finding data faster. We'll use Elasticsearch and LangChain to build a private trivia bot on fun Star Wars trivia data.

How to deploy NLP: Text Embeddings and Vector Search
Taking text embeddings and vector similarity search as the example task, this blog describes how to get up and running with deep learning models for natural language processing, and demonstrates vector search capability in Elasticsearch.

Stateless — your new state of find with Elasticsearch
Discover the future of stateless Elasticsearch. Learn how we're investing in building a new fully cloud native architecture to push the boundaries of scale and speed.

Text similarity search with vector fields
This post explores how text embeddings and Elasticsearch’s new dense_vector type could be used to support similarity search.

Implementing academic papers: lessons learned from Elasticsearch and Lucene
This post shares strategies for incorporating academic papers in a software application, drawing on our experiences with Elasticsearch and Lucene.