Execution walkthrough. Let's start with how a select statement is parsed:
Parsing of a select statement is centered in the XreadQueryExpression method of the ParserDQL class. XreadQueryExpressionBody parses all of the select except the LIMIT, OFFSET and FETCH keywords, while XreadOrderByExpression parses LIMIT, OFFSET, FETCH and ORDER BY. We will trace a simple statement of the form select * from user where ... order by ... limit 2 offset 5; nested subqueries and unions will only be touched on briefly later.
```java
QueryExpression queryExpression = XreadQueryExpressionBody();
SortAndSlice    sortAndSlice    = XreadOrderByExpression();

if (queryExpression.sortAndSlice == null) {
    queryExpression.addSortAndSlice(sortAndSlice);
} else {
    if (queryExpression.sortAndSlice.hasLimit()) {
        if (sortAndSlice.hasLimit()) {
            throw Error.error(ErrorCode.X_42549);
        }

        for (int i = 0; i < sortAndSlice.exprList.size(); i++) {
            Expression e = (Expression) sortAndSlice.exprList.get(i);

            queryExpression.sortAndSlice.addOrderExpression(e);
        }
    } else {
        queryExpression.addSortAndSlice(sortAndSlice);
    }
}
```
Clauses like select top 20 and limit 20 are read first: XreadTopOrLimit reads the TOP or LIMIT value and stores it as a condition in a SortAndSlice object. XreadFromClause reads the FROM clause (including nested subqueries and joined tables), and readWhereGroupHaving handles the WHERE condition together with GROUP BY and HAVING:
```java
QuerySpecification XreadQuerySpecification() {

    QuerySpecification select = XreadSelect();

    if (!select.isValueList) {
        XreadTableExpression(select);
    }

    return select;
}

void XreadTableExpression(QuerySpecification select) {
    XreadFromClause(select);
    readWhereGroupHaving(select);
}
```
If DISTINCT is present, isDistinctSelect is set to true. A while loop then processes the select-list tokens one by one; XreadValueExpression reads each returned column, including any arithmetic (additions, subtractions and so on) applied to that column:
```java
QuerySpecification XreadSelect() {

    QuerySpecification select = new QuerySpecification(compileContext);

    readThis(Tokens.SELECT);

    if (token.tokenType == Tokens.TOP || token.tokenType == Tokens.LIMIT) {
        SortAndSlice sortAndSlice = XreadTopOrLimit();

        if (sortAndSlice != null) {
            select.addSortAndSlice(sortAndSlice);
        }
    }

    if (token.tokenType == Tokens.DISTINCT) {
        select.isDistinctSelect = true;

        read();
    } else if (token.tokenType == Tokens.ALL) {
        read();
    }

    while (true) {
        Expression e = XreadValueExpression();

        if (token.tokenType == Tokens.AS) {
            read();
            checkIsNonCoreReservedIdentifier();
        }

        if (isNonCoreReservedIdentifier()) {
            e.setAlias(HsqlNameManager.getSimpleName(token.tokenString,
                    isDelimitedIdentifier()));
            read();
        }

        select.addSelectColumnExpression(e);

        if (token.tokenType == Tokens.FROM) {
            break;
        }

        if (token.tokenType == Tokens.INTO) {
            break;
        }

        if (readIfThis(Tokens.COMMA)) {
            continue;
        }

        if (token.tokenType == Tokens.CLOSEBRACKET
                || token.tokenType == Tokens.X_ENDPARSE) {
            if (database.sqlSyntaxMss || database.sqlSyntaxMys
                    || database.sqlSyntaxPgs) {
                Expression[] exprList =
                    new Expression[select.exprColumnList.size()];

                select.exprColumnList.toArray(exprList);

                Expression valueList = new Expression(OpTypes.VALUELIST,
                                                      exprList);

                for (int i = 0; i < valueList.nodes.length; i++) {
                    if (valueList.nodes[i].opType != OpTypes.ROW) {
                        valueList.nodes[i] = new Expression(OpTypes.ROW,
                                new Expression[]{ valueList.nodes[i] });
                    }
                }

                compileContext.incrementDepth();

                TableDerived td = prepareSubqueryTable(valueList,
                                                       OpTypes.VALUELIST);

                select = new QuerySpecification(session, td, compileContext,
                                                true);

                compileContext.decrementDepth();

                return select;
            }
        }

        throw unexpectedToken();
    }

    return select;
}
```
XreadAllTypesCommonValueExpression reads the value expression for each column, including its arithmetic; if a `[` follows, the expression is additionally wrapped in an ExpressionAccessor for array element access:
```java
Expression XreadValueExpression() {

    Expression e = XreadAllTypesCommonValueExpression(true);

    if (token.tokenType == Tokens.LEFTBRACKET) {
        read();

        Expression e1 = XreadNumericValueExpression();

        readThis(Tokens.RIGHTBRACKET);

        e = new ExpressionAccessor(e, e1);
    }

    return e;
}
```
This method handles addition and subtraction. Multiplication and division bind tighter, so XreadTerm first wraps them into their own Expression object, which then becomes an operand of the surrounding add/subtract Expression. In other words, Expression objects can nest to arbitrary depth:
```java
Expression XreadNumericValueExpression() {

    Expression e = XreadTerm();

    while (true) {
        int type;

        if (token.tokenType == Tokens.PLUS) {
            type = OpTypes.ADD;
        } else if (token.tokenType == Tokens.MINUS) {
            type = OpTypes.SUBTRACT;
        } else {
            break;
        }

        read();

        Expression a = e;

        e = XreadTerm();
        e = new ExpressionArithmetic(type, a, e);
    }

    return e;
}
```
XreadFactor reads the individual operands, such as constants:
```java
Expression XreadTerm() {

    Expression e = XreadFactor();
    int        type;

    while (true) {
        if (token.tokenType == Tokens.ASTERISK) {
            type = OpTypes.MULTIPLY;
        } else if (token.tokenType == Tokens.DIVIDE) {
            type = OpTypes.DIVIDE;
        } else {
            break;
        }

        read();

        Expression a = e;

        e = XreadFactor();

        if (e == null) {
            throw unexpectedToken();
        }

        e = new ExpressionArithmetic(type, a, e);
    }

    return e;
}
```
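To make the two-level recursive-descent pattern above concrete, here is a self-contained toy sketch (not HSQLDB code): one method loops over `+`/`-` while a second method consumes `*`/`/` first, so multiplication nests deeper and binds tighter. For brevity it evaluates directly instead of building Expression objects.

```java
import java.util.Arrays;
import java.util.List;

public class PrecedenceSketch {
    private final List<String> tokens;
    private int pos = 0;

    public PrecedenceSketch(String input) {
        // Simple whitespace tokenizer: "2 + 3 * 4" -> [2, +, 3, *, 4]
        tokens = Arrays.asList(input.trim().split("\\s+"));
    }

    // Mirrors XreadNumericValueExpression: loop over +/-, left-associative
    public int readExpr() {
        int value = readTerm();
        while (pos < tokens.size()) {
            String t = tokens.get(pos);
            if (t.equals("+"))      { pos++; value = value + readTerm(); }
            else if (t.equals("-")) { pos++; value = value - readTerm(); }
            else break;
        }
        return value;
    }

    // Mirrors XreadTerm: * and / are consumed here first, so they bind tighter
    public int readTerm() {
        int value = Integer.parseInt(tokens.get(pos++));
        while (pos < tokens.size()) {
            String t = tokens.get(pos);
            if (t.equals("*"))      { pos++; value = value * Integer.parseInt(tokens.get(pos++)); }
            else if (t.equals("/")) { pos++; value = value / Integer.parseInt(tokens.get(pos++)); }
            else break;
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(new PrecedenceSketch("2 + 3 * 4").readExpr());  // 14, not 20
        System.out.println(new PrecedenceSketch("10 - 4 / 2").readExpr()); // 8
    }
}
```

The key point carried over from the HSQLDB methods: the inner method runs to completion before the outer one applies its operator, which is exactly why the multiplicative subtree ends up as a child node of the additive Expression.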
We won't trace prepareSubqueryTable and subqueries here.
Prepared statements simply cache the result of SQL parsing: the compiled statement is kept in the StatementManager, so the next execution does not need to parse and compile again.
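The caching idea can be sketched in a few lines. This is a toy illustration, not HSQLDB's actual StatementManager API; all names here are hypothetical. The compiled form is keyed by the SQL text, so a repeated prepare returns the cached plan without re-parsing:

```java
import java.util.HashMap;
import java.util.Map;

public class StatementCacheSketch {

    // Stand-in for a parsed/compiled statement (hypothetical, for illustration)
    public static class CompiledStatement {
        public final String sql;
        public CompiledStatement(String sql) { this.sql = sql; }
    }

    private final Map<String, CompiledStatement> cache = new HashMap<>();
    public int compileCount = 0;   // how many times we actually "compiled"

    public CompiledStatement prepare(String sql) {
        CompiledStatement cs = cache.get(sql);
        if (cs == null) {                    // miss: parse + compile once
            compileCount++;
            cs = new CompiledStatement(sql);
            cache.put(sql, cs);
        }
        return cs;                           // hit: reuse the compiled plan
    }

    public static void main(String[] args) {
        StatementCacheSketch mgr = new StatementCacheSketch();
        mgr.prepare("select * from user where id = ?");
        mgr.prepare("select * from user where id = ?");
        System.out.println(mgr.compileCount); // 1
    }
}
```

A real statement manager also has to invalidate cached plans when schema objects change; this sketch leaves that out.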
The statement we'll use is: select atime, adate from dttest where adate < '16:44:31'
In the debugger, the returned field is only atime, with alias adate.
Let's look at the compiled expression for the WHERE clause. The first node is the left-hand side of the first condition, the column adate, with opType 2 (a COLUMN, i.e. a field value); the second node has opType 1, a constant VALUE. The whole Expression has opType 44, meaning "first node smaller than second node". Each node also carries its data type.
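The node layout just described can be sketched as a tiny tree (a toy model, not HSQLDB's Expression class; the opType constants below mirror the values observed above but are otherwise illustrative):

```java
public class ExprNodeSketch {
    // Values mirroring the debugger observations above (illustrative only)
    public static final int VALUE = 1, COLUMN = 2, SMALLER = 44;

    public final int opType;
    public final Object payload;            // column name or constant value
    public final ExprNodeSketch[] nodes;    // child expressions

    public ExprNodeSketch(int opType, Object payload, ExprNodeSketch... nodes) {
        this.opType  = opType;
        this.payload = payload;
        this.nodes   = nodes;
    }

    public static void main(String[] args) {
        // adate < '16:44:31' : comparison node with two children
        ExprNodeSketch cond = new ExprNodeSketch(SMALLER, null,
                new ExprNodeSketch(COLUMN, "adate"),
                new ExprNodeSketch(VALUE, "16:44:31"));

        System.out.println(cond.nodes.length); // 2
    }
}
```

Evaluating such a tree is just a recursive walk: resolve the column child against the current row, compare it with the constant child, and return the boolean result.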
Now for the execution process. This SQL actually has a problem: change adate in the WHERE clause to atime, otherwise it fails with a date format error.
metaData describes the types of the returned columns.
Back in buildResult(Session session, int[] limits) of the QuerySpecification class: limits carries the LIMIT, OFFSET and TOP parameters. None are specified here, so by default all records are fetched. isSimpleCount marks a query that just returns a single aggregate value, such as a total row count.
Our query takes this path, where the required column values are extracted.
The indexed WHERE lookup happens here, via it.next():
```java
RangeIterator it = rangeIterators[currentIndex];

if (it.next()) {
    if (currentIndex < rangeVariables.length - 1) {
        currentIndex++;

        continue;
    }
} else {
    it.reset();

    currentIndex--;

    continue;
}
```
The iterator's next method remembers the node currently found in the AVL index tree, then searches for the next record that satisfies the conditions.
RangeVariable performs this next operation, and the result is stored on the session, from which the data can be read afterwards: data[i] = exprColumns[i].getValue(session) fetches each value.
```java
public boolean next() {

    while (condIndex < conditions.length) {
        if (isBeforeFirst) {
            isBeforeFirst = false;

            initialiseIterator();
        }

        boolean result = findNext();

        if (result) {
            return true;
        }

        reset();

        condIndex++;
    }

    condIndex = 0;

    return false;
}
```
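The currentIndex++/currentIndex-- loop shown earlier is a classic nested-loop join expressed with a cursor stack: each range iterator scans one table, and when an inner iterator is exhausted it is reset and control backs up to the outer one. A self-contained sketch of just that control flow (names are illustrative, not HSQLDB's):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeLoopSketch {

    // Minimal stand-in for one table's range iterator
    public static class RangeIterator {
        private final int[] rows;
        private int pos = -1;
        public RangeIterator(int[] rows) { this.rows = rows; }
        public boolean next() { return ++pos < rows.length; }
        public void reset()   { pos = -1; }
        public int current()  { return rows[pos]; }
    }

    // Emits every row combination, mirroring the currentIndex advance/retreat loop
    public static List<int[]> crossJoin(RangeIterator[] its) {
        List<int[]> result = new ArrayList<>();
        int currentIndex = 0;

        while (currentIndex >= 0) {
            RangeIterator it = its[currentIndex];

            if (it.next()) {
                if (currentIndex < its.length - 1) {
                    currentIndex++;              // descend to the next table
                    continue;
                }
                int[] row = new int[its.length]; // innermost level: emit a row
                for (int i = 0; i < its.length; i++) row[i] = its[i].current();
                result.add(row);
            } else {
                it.reset();                      // exhausted: back up one level
                currentIndex--;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        RangeIterator[] its = {
            new RangeIterator(new int[]{ 1, 2 }),
            new RangeIterator(new int[]{ 10, 20 })
        };
        System.out.println(crossJoin(its).size()); // 4 combinations
    }
}
```

In HSQLDB each level additionally applies its join and WHERE conditions inside next(), so only qualifying rows cause a descent; the sketch leaves conditions out to show the backtracking skeleton alone.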
The index lookup works in two steps: first the AVL tree is searched for the first node satisfying the condition, then findNext walks forward from that node to find subsequent matches.
findFirstRow and findNode in the IndexAVL class:
```java
public RowIterator findFirstRow(Session session, PersistentStore store,
                                Object[] rowdata, int matchCount,
                                int distinctCount, int compareType,
                                boolean reversed, boolean[] map) {

    if (compareType == OpTypes.MAX) {
        return lastRow(session, store);
    }

    NodeAVL node = findNode(session, store, rowdata, defaultColMap,
                            matchCount, compareType,
                            TransactionManager.ACTION_READ, reversed);

    if (node == null) {
        return emptyIterator;
    }

    return new IndexRowIterator(session, store, this, node, distinctCount,
                                false, reversed);
}
```
```java
/**
 * Finds a match with a row from a different table
 *
 * @param session Session
 * @param store PersistentStore
 * @param rowdata array containing data for the index columns
 * @param rowColMap map of the data to columns
 * @param fieldCount int
 * @param compareType int
 * @param readMode int
 * @return matching node or null
 */
NodeAVL findNode(Session session, PersistentStore store, Object[] rowdata,
                 int[] rowColMap, int fieldCount, int compareType,
                 int readMode, boolean reversed) {

    readLock.lock();

    try {
        NodeAVL x          = getAccessor(store);
        NodeAVL n          = null;
        NodeAVL result     = null;
        Row     currentRow = null;

        if (compareType != OpTypes.EQUAL
                && compareType != OpTypes.IS_NULL) {
            fieldCount--;
        }

        while (x != null) {
            currentRow = x.getRow(store);

            int i = 0;

            if (fieldCount > 0) {
                i = compareRowNonUnique(session, currentRow.getData(),
                                        rowdata, rowColMap, fieldCount);
            }

            if (i == 0) {
                switch (compareType) {

                    case OpTypes.IS_NULL :
                    case OpTypes.EQUAL : {
                        result = x;
                        n      = x.getLeft(store);

                        break;
                    }
                    case OpTypes.NOT :
                    case OpTypes.GREATER : {
                        i = compareObject(session, currentRow.getData(),
                                          rowdata, rowColMap, fieldCount);

                        if (i <= 0) {
                            n = x.getRight(store);
                        } else {
                            result = x;
                            n      = x.getLeft(store);
                        }

                        break;
                    }
                    case OpTypes.GREATER_EQUAL : {
                        i = compareObject(session, currentRow.getData(),
                                          rowdata, rowColMap, fieldCount);

                        if (i < 0) {
                            n = x.getRight(store);
                        } else {
                            result = x;
                            n      = x.getLeft(store);
                        }

                        break;
                    }
                    case OpTypes.SMALLER : {
                        i = compareObject(session, currentRow.getData(),
                                          rowdata, rowColMap, fieldCount);

                        if (i < 0) {
                            result = x;
                            n      = x.getRight(store);
                        } else {
                            n = x.getLeft(store);
                        }

                        break;
                    }
                    case OpTypes.SMALLER_EQUAL : {
                        i = compareObject(session, currentRow.getData(),
                                          rowdata, rowColMap, fieldCount);

                        if (i <= 0) {
                            result = x;
                            n      = x.getRight(store);
                        } else {
                            n = x.getLeft(store);
                        }

                        break;
                    }
                    default :
                        Error.runtimeError(ErrorCode.U_S0500, "Index");
                }
            } else if (i < 0) {
                n = x.getRight(store);
            } else if (i > 0) {
                n = x.getLeft(store);
            }

            if (n == null) {
                break;
            }

            x = n;
        }

        // MVCC: skip over rows this transaction is not allowed to read
        if (session == null) {
            return result;
        }

        while (result != null) {
            currentRow = result.getRow(store);

            if (session.database.txManager.canRead(session, currentRow,
                                                   readMode, colIndex)) {
                break;
            }

            result = reversed ? last(store, result)
                              : next(store, result);

            if (result == null) {
                break;
            }

            currentRow = result.getRow(store);

            if (fieldCount > 0
                    && compareRowNonUnique(session, currentRow.getData(),
                                           rowdata, rowColMap,
                                           fieldCount) != 0) {
                result = null;

                break;
            }
        }

        return result;
    } finally {
        readLock.unlock();
    }
}
```
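The "find first matching node, then walk successors" pattern can be illustrated with an ordered set instead of a hand-rolled AVL tree (this is an analogy, not HSQLDB code): the tree search locates the first qualifying key in O(log n), and the scan then visits keys in index order until an end condition fails. Here for a GREATER_EQUAL lookup:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

public class IndexScanSketch {

    // All keys >= bound, in index order: ceiling() plays the role of
    // findFirstRow, higher() plays the role of findNext
    public static List<Integer> scanGreaterEqual(TreeSet<Integer> index,
                                                 int bound) {
        List<Integer> result = new ArrayList<>();
        Integer k = index.ceiling(bound);   // first node >= bound (tree descent)
        while (k != null) {                 // walk successors in key order
            result.add(k);
            k = index.higher(k);
        }
        return result;
    }

    public static void main(String[] args) {
        TreeSet<Integer> index = new TreeSet<>(Arrays.asList(5, 1, 9, 3, 7));
        System.out.println(scanGreaterEqual(index, 5)); // [5, 7, 9]
    }
}
```

HSQLDB's findNode differs in two real-world respects: it compares composite keys column by column (compareRowNonUnique), and the MVCC block at the end skips nodes the current transaction cannot read before handing the start node to the row iterator.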
```java
/**
 * Advances to the next available value. <p>
 *
 * @return true if a next value is available upon exit
 */
private boolean findNext() {

    boolean result = false;

    while (true) {
        currentRow = it.getNextRow();

        if (currentRow == null) {
            break;
        }

        currentData = currentRow.getData();

        if (conditions[condIndex].terminalCondition != null
                && !conditions[condIndex].terminalCondition
                    .testCondition(session)) {
            break;
        }

        if (conditions[condIndex].indexEndCondition != null
                && !conditions[condIndex].indexEndCondition
                    .testCondition(session)) {
            if (!conditions[condIndex].isJoin) {
                hasLeftOuterRow = false;
            }

            break;
        }

        if (joinConditions[condIndex].nonIndexCondition != null
                && !joinConditions[condIndex].nonIndexCondition
                    .testCondition(session)) {
            continue;
        }

        if (whereConditions[condIndex].nonIndexCondition != null
                && !whereConditions[condIndex].nonIndexCondition
                    .testCondition(session)) {
            hasLeftOuterRow = false;

            addFoundRow();

            continue;
        }

        Expression e = conditions[condIndex].excludeConditions;

        if (e != null && e.testCondition(session)) {
            continue;
        }

        addFoundRow();

        hasLeftOuterRow = false;

        return true;
    }

    it.release();

    currentRow  = null;
    currentData = rangeVar.emptyData;

    if (hasLeftOuterRow && condIndex == conditions.length - 1) {
        result = (whereConditions[condIndex].nonIndexCondition == null
                  || whereConditions[condIndex].nonIndexCondition
                      .testCondition(session));
        hasLeftOuterRow = false;
    }

    return result;
}

private void addFoundRow() {

    if (rangeVar.isRightJoin) {
        lookup.add(currentRow.getPos());
    }
}
```
For example, a NOT_EQUAL condition is evaluated here by testCondition to decide whether a row qualifies.