Below is the code of StandardTokenizer.jj, with the leading comments omitted. It is written in JavaCC grammar syntax, so it is much easier to read once you have learned JavaCC; much of the other code in the standard package is generated from this file. From this file you can see how StandardTokenizer distinguishes tokens, and that the segmentation StandardAnalyzer applies to CJK text is single-character splitting.
options {
STATIC = false;
//IGNORE_CASE = true;
//BUILD_PARSER = false;
UNICODE_INPUT = true;
USER_CHAR_STREAM = true;
OPTIMIZE_TOKEN_MANAGER = true;
//DEBUG_TOKEN_MANAGER = true;
}
PARSER_BEGIN(StandardTokenizer)
package org.apache.lucene.analysis.standard;
import java.io.*;
/** A grammar-based tokenizer constructed with JavaCC.
*
* This should be a good tokenizer for most European-language documents.
*
* Many applications have specific tokenizer needs. If this tokenizer does
* not suit your application, please consider copying this source code
* directory to your project and maintaining your own grammar-based tokenizer.
 * A grammar-based tokenizer.
*/
public class StandardTokenizer extends org.apache.lucene.analysis.Tokenizer {
/** Constructs a tokenizer for this Reader. */
public StandardTokenizer(Reader reader) {
this(new FastCharStream(reader));
this.input = reader;
}
}
PARSER_END(StandardTokenizer)
/* Basic notation used in the grammar:
 * |     alternation (either/or)
 * +     one or more repetitions
 * *     zero or more repetitions
 * [ ]   exactly one of the listed options (a character class)
 * <X>   a reference to another named definition
 */
TOKEN : {                                         // token patterns
  // each pattern is described with a regular expression

  // basic word: a sequence of digits & letters, example: 13j14234n4k32
  <ALPHANUM: (<LETTER>|<DIGIT>|<KOREAN>)+ >

  // internal apostrophes: O'Reilly, you're, O'Reilly's
  // use a post-filter to remove possesives
| <APOSTROPHE: <ALPHA> ("'" <ALPHA>)+ >

  // acronyms: U.S.A., I.B.M., etc.
  // use a post-filter to remove dots
| <ACRONYM: <ALPHA> "." (<ALPHA> ".")+ >

  // company names like AT&T and Excite@Home
| <COMPANY: <ALPHA> ("&"|"@") <ALPHA> >

  // email addresses, e.g. zhangbufeng@163.com
| <EMAIL: <ALPHANUM> (("."|"-"|"_") <ALPHANUM>)* "@" <ALPHANUM> (("."|"-") <ALPHANUM>)+ >

  // hostname, e.g. 202.113.9.183
| <HOST: <ALPHANUM> ("." <ALPHANUM>)+ >

  // floating point, serial, model numbers, ip addresses, etc.
  // every other segment must have at least one digit
| <NUM: (<ALPHANUM> <P> <HAS_DIGIT>
       | <HAS_DIGIT> <P> <ALPHANUM>
       | <ALPHANUM> (<P> <HAS_DIGIT> <P> <ALPHANUM>)+
       | <HAS_DIGIT> (<P> <ALPHANUM> <P> <HAS_DIGIT>)+
       | <ALPHANUM> <P> <HAS_DIGIT> (<P> <ALPHANUM> <P> <HAS_DIGIT>)+
       | <HAS_DIGIT> <P> <ALPHANUM> (<P> <HAS_DIGIT> <P> <ALPHANUM>)+
        )
  >
| <#P: ("_"|"-"|"/"|"."|",") >
| <#HAS_DIGIT:                                    // at least one digit
    (<LETTER>|<DIGIT>)*
    <DIGIT>
    (<LETTER>|<DIGIT>)*
  >

| < #ALPHA: (<LETTER>)+ >                         // a letter sequence
| < #LETTER:                                      // unicode letters
      [
       "\u0041"-"\u005a",
       "\u0061"-"\u007a"
       // (the remaining Unicode letter ranges are omitted in this post)
      ]
  >
| < CJ:                                           // Chinese, Japanese
      [
       "\u3040"-"\u318f",
       "\u3300"-"\u337f",
       "\u3400"-"\u3d2d",
       "\u4e00"-"\u9fff",
       "\uf900"-"\ufaff"
      ]
  >
| < KOREAN:                                       // korean
      [
       "\uac00"-"\ud7af",                         // Hangul Syllables
       "\u1100"-"\u11ff"                          // Hangul Jamo
      ]
  >
| < #DIGIT:                                       // unicode digits
      [
       "\u0030"-"\u0039"
       // (the remaining Unicode digit ranges are omitted in this post)
      ]
  >
}
SKIP : {                                          // skip unrecognized chars
  <NOISE: ~[] >
}
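The token definitions above can be approximated with ordinary Java regular expressions. The sketch below is a loose, ASCII-only rendering (the class, field names, and helper method are mine, and the grammar's full Unicode letter/digit ranges are deliberately not reproduced), just to make the shapes of APOSTROPHE, ACRONYM, EMAIL, and HOST concrete:

```java
import java.util.regex.Pattern;

public class TokenPatternsSketch {
    // Simplified ASCII-only stand-ins for the grammar's named tokens.
    static final Pattern ALPHANUM = Pattern.compile("[A-Za-z0-9]+");
    // <ALPHA> ("'" <ALPHA>)+  --  O'Reilly, you're
    static final Pattern APOSTROPHE = Pattern.compile("[A-Za-z]+('[A-Za-z]+)+");
    // <ALPHA> "." (<ALPHA> ".")+  --  U.S.A., I.B.M.
    static final Pattern ACRONYM = Pattern.compile("[A-Za-z]+\\.([A-Za-z]+\\.)+");
    // <ALPHANUM> (("."|"-"|"_") <ALPHANUM>)* "@" <ALPHANUM> (("."|"-") <ALPHANUM>)+
    static final Pattern EMAIL = Pattern.compile(
        "[A-Za-z0-9]+([.\\-_][A-Za-z0-9]+)*@[A-Za-z0-9]+([.\\-][A-Za-z0-9]+)+");
    // <ALPHANUM> ("." <ALPHANUM>)+  --  202.113.9.183, www.example.com
    static final Pattern HOST = Pattern.compile("[A-Za-z0-9]+(\\.[A-Za-z0-9]+)+");

    static boolean matches(Pattern p, String s) {
        return p.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches(APOSTROPHE, "O'Reilly"));       // true
        System.out.println(matches(ACRONYM, "U.S.A."));            // true
        System.out.println(matches(EMAIL, "zhangbufeng@163.com")); // true
        System.out.println(matches(HOST, "202.113.9.183"));        // true
    }
}
```

Note that in the real grammar the longest match wins across all patterns, which plain `Pattern.matches` does not model; this only shows what each individual pattern accepts.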
/** Returns the next token in the stream, or null at EOS.
* The returned token's type is set to an element of
 * {@link StandardTokenizerConstants#tokenImage}.
 */
// uses org.apache.lucene.analysis.Token
org.apache.lucene.analysis.Token next() throws IOException :
{
//initialize the token
Token token = null;
}
{
  ( token = <ALPHANUM> |
    token = <APOSTROPHE> |
    token = <ACRONYM> |
    token = <COMPANY> |
    token = <EMAIL> |
    token = <HOST> |
    token = <NUM> |
    token = <CJ> |
    token = <EOF>
  )
{
if (token.kind == EOF) {
return null;
} else {
//return the next token; here you can see that StandardAnalyzer falls back to single-character splitting for CJK Unicode characters
return
new org.apache.lucene.analysis.Token(token.image,
token.beginColumn,token.endColumn,
tokenImage[token.kind]);
}
}
}
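Since the CJ token above is a bare character class with no `+`, it matches exactly one character, which is why StandardAnalyzer splits Chinese text character by character. The behavior can be illustrated without Lucene; this is a sketch of the effect, not Lucene's actual code, and the range check covers only the common \u4e00-\u9fff block:

```java
import java.util.ArrayList;
import java.util.List;

public class CjkSplitSketch {
    // Emits each CJK character as its own token, and runs of other
    // letters/digits as whole tokens -- mimicking how <CJ> matches
    // exactly one character while <ALPHANUM> matches a whole run.
    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder run = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (c >= '\u4e00' && c <= '\u9fff') {  // common CJK block only
                if (run.length() > 0) { tokens.add(run.toString()); run.setLength(0); }
                tokens.add(String.valueOf(c));     // one token per CJK char
            } else if (Character.isLetterOrDigit(c)) {
                run.append(c);                     // letters/digits form a run
            } else {                               // separator: flush the run
                if (run.length() > 0) { tokens.add(run.toString()); run.setLength(0); }
            }
        }
        if (run.length() > 0) tokens.add(run.toString());
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("lucene分词test"));
        // [lucene, 分, 词, test]
    }
}
```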
After modifying this file, run JavaCC on it (the usual invocation is javacc StandardTokenizer.jj) to generate your own .java tokenizer classes.
This article comes from the ChinaUnix blog; the original is at: http://blog.chinaunix.net/u/20045/showart_410777.html