1. Lucene Delete Function
/**
 * Delete a document from the index.
 */
public void delete()
{
    IndexWriter writer = null;
    try
    {
        Directory dir = FSDirectory.open(new File("E:/LuceneIndex"));
        writer = new IndexWriter(dir, new IndexWriterConfig(
                Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
        // The parameter is a selector: either a Query or a Term.
        // A Query is a set of conditions (e.g. id like '%1%').
        // A Term is one exact condition (e.g. name = "1").
        writer.deleteDocuments(new Term("name", "FileItemIterator.java"));
    }
    catch (CorruptIndexException e)
    {
        e.printStackTrace();
    }
    catch (LockObtainFailedException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        try
        {
            if (null != writer)
            {
                writer.close();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
2. Comments:
1) Like the Windows recycle bin, Lucene keeps deleted documents around instead of removing them immediately.
2) When we execute a query, we won't get data that has been deleted.
3) But we can recover the deleted documents whenever we want to roll back. Deleted items are tracked in files named _*_*.del.
3. We can use IndexReader to get the number of deleted documents
/**
 * Search
 *
 * @throws CorruptIndexException
 * @throws IOException
 */
public void search() throws CorruptIndexException, IOException
{
    IndexReader reader = IndexReader.open(dir);
    // The reader exposes the document counts of the index.
    System.out.println("numDocs = " + reader.numDocs());
    System.out.println("maxDocs = " + reader.maxDoc());
    System.out.println("deletedDocs = " + reader.numDeletedDocs());
    reader.close();
}
4. We can use IndexReader to recover deleted documents
/**
 * Undelete
 */
public void undelete()
{
    IndexReader reader = null;
    try
    {
        // param 1: the directory
        // param 2: readOnly -- must be false so the reader may modify the index
        reader = IndexReader.open(dir, false);
        reader.undeleteAll();
    }
    catch (CorruptIndexException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        try
        {
            if (null != reader)
            {
                reader.close();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Comments:
1) To recover the deleted documents, we have to open the reader with readOnly set to false, because readOnly defaults to true.
2) After the undelete operation, the file with the .del suffix is gone; its data has been restored into the index files.
5. How do we empty the recycle bin? (Delete the files with the .del suffix)
1) Before Lucene 3.5, this operation was writer.optimize(). It is now deprecated, because every optimize rewrites all the index files, which is very costly.
2) In Lucene 3.5 and later, writer.forceMerge() replaces writer.optimize(). They do the same work, and both are costly.
3) So instead we can use writer.forceMergeDeletes(), which only purges the deleted documents and is much cheaper.
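The steps above can be sketched against the Lucene 3.5 API as follows. This is a minimal sketch, not part of the original code; the index path and analyzer are assumed to match the earlier examples:

```java
import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class EmptyRecycleBin
{
    public static void main(String[] args) throws IOException
    {
        // Assumed index location, mirroring the delete() example above.
        Directory dir = FSDirectory.open(new File("E:/LuceneIndex"));
        IndexWriter writer = null;
        try
        {
            writer = new IndexWriter(dir, new IndexWriterConfig(
                    Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
            // Physically remove the documents that are only marked as
            // deleted (the *.del bookkeeping). Cheaper than forceMerge()
            // because only segments that contain deletions are rewritten.
            writer.forceMergeDeletes();
        }
        finally
        {
            if (null != writer)
            {
                writer.close();
            }
        }
    }
}
```

Note that after forceMergeDeletes() returns, reader.undeleteAll() can no longer bring the documents back; the recycle bin is truly empty.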
6. About index file redundancy:
1) Every time we execute buildIndex(), another group of index files is created.
2) As the number of executions grows, the index directory becomes larger and larger, so the index files should be compacted.
3) The explicit optimize operation is deprecated because Lucene now maintains and merges these index files for us automatically.
4) But we can still merge index files manually.
/**
 * Merge
 */
public void merge()
{
    IndexWriter writer = null;
    try
    {
        writer = new IndexWriter(dir, new IndexWriterConfig(
                Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
        // Lucene merges the index down to at most two segments;
        // deleted documents are dropped during the merge.
        // Like optimize() before it, this is a costly operation and is
        // normally best left to Lucene's automatic merging.
        writer.forceMerge(2);
    }
    catch (CorruptIndexException e)
    {
        e.printStackTrace();
    }
    catch (LockObtainFailedException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        try
        {
            if (null != writer)
            {
                writer.close();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
7. How do we delete all index files before building the index?
/**
 * Create Index
 *
 * @throws IOException
 * @throws LockObtainFailedException
 * @throws CorruptIndexException
 */
public void buildIndex() throws CorruptIndexException,
        LockObtainFailedException, IOException
{
    // 2. Create an IndexWriter.
    // --> It is used to write data into the index files.
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_35,
            new SimpleAnalyzer(Version.LUCENE_35));
    IndexWriter writer = new IndexWriter(dir, config);
    // This call empties the index directory.
    writer.deleteAll();
    // Before 3.5, the way to create an index was (now deprecated):
    // new IndexWriter(Directory d, Analyzer a, boolean c, MaxFieldLength mfl);
    // d: directory, a: analyzer, c: whether to create a new index each time,
    // mfl: the max length of the field to be indexed.
    // 3. Create a Document.
    // --> The target we want to search may be a doc file or a table in a DB.
    // --> The path, name, size, and modified date of the file --
    // --> all the information of the file should be stored in the Document.
    Document doc = null;
    // 4. Each item of the Document is called a Field.
    // --> The relationship of document to field is like table to cell.
    // E.g. we want to build an index for all the txt files in one directory:
    // each txt file in that directory becomes a document, and its name,
    // size, modified date, and content become fields.
    File files = new File("E:/LuceneData");
    for (File file : files.listFiles())
    {
        doc = new Document();
        // With a FileReader, the content is indexed but not stored:
        // doc.add(new Field("content", new FileReader(file)));
        // To store the content in the index file, we have to read the
        // content into a string.
        String content = FileUtils.readFileToString(file);
        doc.add(new Field("content", content, Field.Store.YES,
                Field.Index.ANALYZED));
        doc.add(new Field("name", file.getName(), Field.Store.YES,
                Field.Index.NOT_ANALYZED));
        // Field.Store.YES --> the field value is stored in the index file.
        // Field.Index.ANALYZED --> the field is analyzed (tokenized).
        doc.add(new Field("path", file.getAbsolutePath(), Field.Store.YES,
                Field.Index.NOT_ANALYZED));
        // 5. Create the index entry for the document via the IndexWriter.
        writer.addDocument(doc);
    }
    // 6. Close the IndexWriter.
    if (null != writer)
    {
        writer.close();
    }
}
Comments: writer.deleteAll() --> deletes every document in the index.
8. How do we update the index?
/**
 * Update
 */
public void update()
{
    IndexWriter writer = null;
    Document doc = null;
    try
    {
        writer = new IndexWriter(dir, new IndexWriterConfig(
                Version.LUCENE_35, new SimpleAnalyzer(Version.LUCENE_35)));
        doc = new Document();
        doc.add(new Field("id", "1", Field.Store.YES, Field.Index.ANALYZED));
        doc.add(new Field("name", "Yang", Field.Store.YES,
                Field.Index.NOT_ANALYZED));
        doc.add(new Field("password", "Kunlun", Field.Store.YES,
                Field.Index.NOT_ANALYZED));
        doc.add(new Field("gender", "Male", Field.Store.YES,
                Field.Index.NOT_ANALYZED));
        doc.add(new Field("score", 110 + "", Field.Store.YES,
                Field.Index.NOT_ANALYZED));
        /*
         * Lucene does not actually provide an update function. Update is
         * delete + add: first delete the index entries that match the
         * term, then build a new index entry from the doc passed in.
         */
        writer.updateDocument(new Term("name", "Davy"), doc);
    }
    catch (CorruptIndexException e)
    {
        e.printStackTrace();
    }
    catch (LockObtainFailedException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
    finally
    {
        try
        {
            if (null != writer)
            {
                writer.close();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Summary:
1. Delete: use writer.deleteAll() or writer.deleteDocuments(new Term(key, value)); then writer.forceMergeDeletes() to purge the deleted documents for good.
2. Recover: use reader.undeleteAll() to recover all deleted items (the reader must be opened with readOnly = false).
3. Update: use writer.updateDocument(new Term(key, value), doc); it deletes the items that match the term and adds the doc passed in.