Write Your Own Database System: Implementing a Small SQL Interpreter (Part 1)

望月从良 · Published 2023-09-02 10:45:24 · Column: Coding迪斯尼

A core component of any database system is the SQL interpreter. Anyone who has used MySQL knows that we drive the database with statements written in SQL, so the system must contain an SQL interpreter that reads this code and, according to its intent, makes the database carry out the corresponding operations. In this installment we build a simple SQL interpreter.

The interpreter is built on compiler theory; I explain the underlying algorithms in my compiler-principles videos on Bilibili (B站), so I will not repeat them here. The first step in building an interpreter is a lexer. In the B站 compiler series I implemented a small compiler (dragon-compiler), so I simply take its lexer, adapt it slightly, and let it tokenize SQL. First copy the lexer package into the current project and open token.go: we change the token definitions, adding tags for SQL keywords and removing the definitions that have nothing to do with SQL. The modified code looks like this:

package lexer

type Tag uint32

const (
    // AND corresponds to the SQL keyword
    AND Tag = iota + 256
    //BREAK
    //DO
    EQ
    FALSE
    GE
    ID
    //IF
    //ELSE
    INDEX
    LE
    INT
    FLOAT
    MINUS
    PLUS
    NE
    NUM
    OR
    REAL
    TRUE
    //WHILE
    LEFT_BRACE    // "{"
    RIGHT_BRACE   // "}"
    LEFT_BRACKET  // "("
    RIGHT_BRACKET // ")"
    AND_OPERATOR
    OR_OPERATOR
    ASSIGN_OPERATOR
    NEGATE_OPERATOR
    LESS_OPERATOR
    GREATER_OPERATOR
    BASIC // covers type names such as int, float, bool, char
    //TEMP  // temporary register variables for intermediate code
    //SEMICOLON

    // SQL keywords added for the SQL interpreter
    SELECT
    FROM
    WHERE
    INSERT
    INTO
    VALUES
    DELETE
    UPDATE
    SET
    CREATE
    TABLE
    VARCHAR
    VIEW
    AS
    ON
    COMMA
    STRING
    // end of the SQL keyword definitions (INT and INDEX are already defined above)
    EOF

    ERROR
)

var token_map = make(map[Tag]string)

func init() {
    // register the display strings for the SQL keyword tags
    token_map[AND] = "AND"
    token_map[SELECT] = "SELECT"
    token_map[WHERE] = "WHERE"
    token_map[INSERT] = "INSERT"
    token_map[INTO] = "INTO"
    token_map[VALUES] = "VALUES"
    token_map[DELETE] = "DELETE"
    token_map[UPDATE] = "UPDATE"
    token_map[SET] = "SET"
    token_map[CREATE] = "CREATE"
    token_map[TABLE] = "TABLE"
    token_map[INT] = "INT"
    token_map[VARCHAR] = "VARCHAR"
    token_map[VIEW] = "VIEW"
    token_map[AS] = "AS"
    token_map[INDEX] = "INDEX"
    token_map[ON] = "ON"
    token_map[COMMA] = ","
    token_map[BASIC] = "BASIC"
    //token_map[DO] = "do"
    //token_map[ELSE] = "else"
    token_map[EQ] = "EQ"
    token_map[FALSE] = "FALSE"
    token_map[GE] = "GE"
    token_map[ID] = "ID"
    //token_map[IF] = "if"
    token_map[INT] = "int"
    token_map[FLOAT] = "float"

    token_map[LE] = "<="
    token_map[MINUS] = "-"
    token_map[PLUS] = "+"
    token_map[NE] = "!="
    token_map[NUM] = "NUM"
    token_map[OR] = "OR"
    token_map[REAL] = "REAL"
    //token_map[TEMP] = "t"
    token_map[TRUE] = "TRUE"
    //token_map[WHILE] = "while"
    //token_map[DO] = "do"
    //token_map[BREAK] = "break"
    token_map[AND_OPERATOR] = "&"
    token_map[OR_OPERATOR] = "|"
    token_map[ASSIGN_OPERATOR] = "="
    token_map[NEGATE_OPERATOR] = "!"
    token_map[LESS_OPERATOR] = "<"
    token_map[GREATER_OPERATOR] = ">"
    token_map[LEFT_BRACE] = "{"
    token_map[RIGHT_BRACE] = "}"
    token_map[LEFT_BRACKET] = "("
    token_map[RIGHT_BRACKET] = ")"
    token_map[EOF] = "EOF"
    token_map[ERROR] = "ERROR"
    //token_map[SEMICOLON] = ";"

}

type Token struct {
    lexeme string
    Tag    Tag
}

func (t *Token) ToString() string {
    if t.lexeme == "" {
        return token_map[t.Tag]
    }

    return t.lexeme
}

func NewToken(tag Tag) Token {
    return Token{
        lexeme: "",
        Tag:    tag,
    }
}

func NewTokenWithString(tag Tag, lexeme string) *Token {
    return &Token{
        lexeme: lexeme,
        Tag:    tag,
    }
}

In the change above we dropped the original C-language keywords and added a series of tags for SQL keywords. Next, open word_token.go and modify it as follows:

package lexer

type Word struct {
    lexeme string
    Tag    Token
}

func NewWordToken(s string, tag Tag) Word {
    return Word{
        lexeme: s,
        Tag:    NewToken(tag),
    }
}

func (w *Word) ToString() string {
    return w.lexeme
}

func GetKeyWords() []Word {
    key_words := []Word{}
    key_words = append(key_words, NewWordToken("||", OR))
    key_words = append(key_words, NewWordToken("==", EQ))
    key_words = append(key_words, NewWordToken("!=", NE))
    key_words = append(key_words, NewWordToken("<=", LE))
    key_words = append(key_words, NewWordToken(">=", GE))
    // add the SQL keywords
    key_words = append(key_words, NewWordToken("AND", AND))
    key_words = append(key_words, NewWordToken("SELECT", SELECT))
    key_words = append(key_words, NewWordToken("FROM", FROM))
    key_words = append(key_words, NewWordToken("INSERT", INSERT))
    key_words = append(key_words, NewWordToken("INTO", INTO))
    key_words = append(key_words, NewWordToken("VALUES", VALUES))
    key_words = append(key_words, NewWordToken("DELETE", DELETE))
    key_words = append(key_words, NewWordToken("UPDATE", UPDATE))
    key_words = append(key_words, NewWordToken("SET", SET))
    key_words = append(key_words, NewWordToken("CREATE", CREATE))
    key_words = append(key_words, NewWordToken("TABLE", TABLE))
    key_words = append(key_words, NewWordToken("INT", INT))
    key_words = append(key_words, NewWordToken("VARCHAR", VARCHAR))
    key_words = append(key_words, NewWordToken("VIEW", VIEW))
    key_words = append(key_words, NewWordToken("AS", AS))
    key_words = append(key_words, NewWordToken("INDEX", INDEX))
    key_words = append(key_words, NewWordToken("ON", ON))

    //key_words = append(key_words, NewWordToken("minus", MINUS))
    //key_words = append(key_words, NewWordToken("true", TRUE))
    //key_words = append(key_words, NewWordToken("false", FALSE))
    //key_words = append(key_words, NewWordToken("if", IF))
    //key_words = append(key_words, NewWordToken("else", ELSE))
    // while/do keywords (removed)
    //key_words = append(key_words, NewWordToken("while", WHILE))
    //key_words = append(key_words, NewWordToken("do", DO))
    //key_words = append(key_words, NewWordToken("break", BREAK))
    // type-name keywords (removed)
    //key_words = append(key_words, NewWordToken("int", BASIC))
    //key_words = append(key_words, NewWordToken("float", BASIC))
    //key_words = append(key_words, NewWordToken("bool", BASIC))
    //key_words = append(key_words, NewWordToken("char", BASIC))

    return key_words
}

Here, too, the C-language keywords are removed and the SQL keywords are registered instead. Apart from these changes the basic logic of the lexer is untouched; its code is as follows (lexer.go):

package lexer

import (
    "bufio"
    "strconv"
    "strings"
    "unicode"
)

type Lexer struct {
    Lexeme       string
    lexemeStack  []string
    tokenStack   []Token
    peek         byte
    Line         uint32
    reader       *bufio.Reader
    read_pointer int
    key_words    map[string]Token
}

func NewLexer(source string) Lexer {
    str := strings.NewReader(source)
    source_reader := bufio.NewReaderSize(str, len(source))
    lexer := Lexer{
        Line:      uint32(1),
        reader:    source_reader,
        key_words: make(map[string]Token),
    }

    lexer.reserve()

    return lexer
}

func (l *Lexer) ReverseScan() {
    /*
        back_len := len(l.Lexeme)
        bufio can only unread one byte, so this naive approach does not work
        for i := 0; i < back_len; i++ {
            l.reader.UnreadByte()
        }
    */
    if l.read_pointer > 0 {
        l.read_pointer = l.read_pointer - 1
    }

}

func (l *Lexer) reserve() {
    key_words := GetKeyWords()
    for _, key_word := range key_words {
        l.key_words[key_word.ToString()] = key_word.Tag
    }
}

func (l *Lexer) Readch() error {
    char, err := l.reader.ReadByte() // read the next character in advance
    l.peek = char
    return err
}

func (l *Lexer) ReadCharacter(c byte) (bool, error) {

    chars, err := l.reader.Peek(1)
    if err != nil {
        return false, err
    }

    peekChar := chars[0]
    if peekChar != c {
        return false, nil
    }

    l.Readch() // consume the character we just peeked
    return true, nil
}

func (l *Lexer) UnRead() error {
    return l.reader.UnreadByte()
}

func (l *Lexer) Scan() (Token, error) {

    if l.read_pointer < len(l.lexemeStack) {
        l.Lexeme = l.lexemeStack[l.read_pointer]
        token := l.tokenStack[l.read_pointer]
        l.read_pointer = l.read_pointer + 1
        return token, nil
    } else {
        l.read_pointer = l.read_pointer + 1
    }

    for {
        err := l.Readch()
        if err != nil {
            return NewToken(ERROR), err
        }

        if l.peek == ' ' || l.peek == '\t' {
            continue
        } else if l.peek == '\n' {
            l.Line = l.Line + 1
        } else {
            break
        }
    }

    l.Lexeme = ""

    switch l.peek {
    case ',':
        l.Lexeme = ","
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(COMMA)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case '{':
        l.Lexeme = "{"
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(LEFT_BRACE)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case '}':
        l.Lexeme = "}"
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(RIGHT_BRACE)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case '+':
        l.Lexeme = "+"
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(PLUS)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case '-':
        l.Lexeme = "-"
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(MINUS)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case '(':
        l.Lexeme = "("
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(LEFT_BRACKET)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case ')':
        l.Lexeme = ")"
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(RIGHT_BRACKET)
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    case '&':
        l.Lexeme = "&"
        if ok, err := l.ReadCharacter('&'); ok {
            l.Lexeme = "&&"
            word := NewWordToken("&&", AND)
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, word.Tag)
            return word.Tag, err
        } else {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(AND_OPERATOR)
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }
    case '|':
        l.Lexeme = "|"
        if ok, err := l.ReadCharacter('|'); ok {
            l.Lexeme = "||"
            word := NewWordToken("||", OR)
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, word.Tag)
            return word.Tag, err
        } else {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(OR_OPERATOR)
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }

    case '=':
        l.Lexeme = "="
        if ok, err := l.ReadCharacter('='); ok {
            l.Lexeme = "=="
            word := NewWordToken("==", EQ)
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, word.Tag)
            return word.Tag, err
        } else {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(ASSIGN_OPERATOR)
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }

    case '!':
        l.Lexeme = "!"
        if ok, err := l.ReadCharacter('='); ok {
            l.Lexeme = "!="
            word := NewWordToken("!=", NE)
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, word.Tag)
            return word.Tag, err
        } else {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(NEGATE_OPERATOR)
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }

    case '<':
        l.Lexeme = "<"
        if ok, err := l.ReadCharacter('='); ok {
            l.Lexeme = "<="
            word := NewWordToken("<=", LE)
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, word.Tag)
            return word.Tag, err
        } else {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(LESS_OPERATOR)
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }

    case '>':
        l.Lexeme = ">"
        if ok, err := l.ReadCharacter('='); ok {
            l.Lexeme = ">="
            word := NewWordToken(">=", GE)
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, word.Tag)
            return word.Tag, err
        } else {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(GREATER_OPERATOR)
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }

    case '"':
        for {
            err := l.Readch()
            if l.peek == '"' {
                haveSeenQuote = false
                l.lexemeStack = append(l.lexemeStack, l.Lexeme)
                token := NewToken(STRING)
                l.tokenStack = append(l.tokenStack, token)
                return token, nil
            }

            if err != nil {
                panic("string no end with quota")
            }
            l.Lexeme += string(l.peek)
        }

    }

    if unicode.IsNumber(rune(l.peek)) {
        var v int
        var err error
        for {
            num, err := strconv.Atoi(string(l.peek))
            if err != nil {
                if l.peek != 0 { // l.peek == 0 means all input has been consumed
                    l.UnRead() // push the character back for the next scan
                }

                break
            }
            v = 10*v + num
            l.Lexeme += string(l.peek)
            l.Readch()
        }

        if l.peek != '.' {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            token := NewToken(NUM)
            token.lexeme = l.Lexeme
            l.tokenStack = append(l.tokenStack, token)
            return token, err
        }
        l.Lexeme += string(l.peek)
        l.Readch() // step over the "."

        x := float64(v)
        d := float64(10)
        for {
            l.Readch()
            num, err := strconv.Atoi(string(l.peek))
            if err != nil {
                if l.peek != 0 { // l.peek == 0 means all input has been consumed
                    l.UnRead() // push the character back for the next scan
                }

                break
            }

            x = x + float64(num)/d
            d = d * 10
            l.Lexeme += string(l.peek)
        }
        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token := NewToken(REAL)
        token.lexeme = l.Lexeme
        l.tokenStack = append(l.tokenStack, token)
        return token, err
    }

    if unicode.IsLetter(rune(l.peek)) {
        var buffer []byte
        for {
            buffer = append(buffer, l.peek)
            l.Lexeme += string(l.peek)

            l.Readch()
            if !unicode.IsLetter(rune(l.peek)) {
                if l.peek != 0 { // l.peek == 0 means all input has been consumed
                    l.UnRead() // push the character back for the next scan
                }
                break
            }
        }

        s := string(buffer)
        // the keyword table stores SQL keywords in upper case while SQL itself is
        // case-insensitive, so normalize the identifier before the lookup
        token, ok := l.key_words[strings.ToUpper(s)]
        if ok {
            l.lexemeStack = append(l.lexemeStack, l.Lexeme)
            l.tokenStack = append(l.tokenStack, token)
            return token, nil
        }

        l.lexemeStack = append(l.lexemeStack, l.Lexeme)
        token = NewToken(ID)
        token.lexeme = l.Lexeme
        l.tokenStack = append(l.tokenStack, token)
        return token, nil
    }

    return NewToken(EOF), nil
}
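
One detail of the lexer above matters for the parser we build later: every token Scan produces is appended to lexemeStack and tokenStack, and ReverseScan merely decrements read_pointer, so a caller can look at a token and then put it back; the next call to Scan returns the cached token instead of touching the reader again. Below is a minimal sketch of this push-back behaviour (my own illustrative snippet, using the same GOPATH-style "lexer" import path as the article's other examples):

package main

import (
    "fmt"
    "lexer"
)

func main() {
    lex := lexer.NewLexer("select age from student")

    tok, _ := lex.Scan() // first token: the SELECT keyword
    fmt.Println(tok.ToString())

    lex.ReverseScan() // push the token back

    tok, _ = lex.Scan() // the cached token is returned again
    fmt.Println(tok.ToString())
}

The recursive-descent parser below relies on exactly this pattern whenever it needs to peek one token ahead.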

To keep the article from getting too long I have not pasted every modified file here; search for "coding迪斯尼" on Bilibili for the complete material. Let's exercise the code above by adding the following test to main.go:

package main

import (
    "fmt"
    "lexer"
)

func main() {
    sqlLexer := lexer.NewLexer("select name , sex from student where age > 20")
    var tokens []*lexer.Token
    tokens = append(tokens, lexer.NewTokenWithString(lexer.SELECT, "select"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "name"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.COMMA, ","))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "sex"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.FROM, "from"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "student"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.WHERE, "where"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.ID, "age"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.GREATER_OPERATOR, ">"))
    tokens = append(tokens, lexer.NewTokenWithString(lexer.NUM, "20"))

    for _, tok := range tokens {
        sqlTok, err := sqlLexer.Scan()
        if err != nil {
            fmt.Println("lexer error:", err)
            return
        }

        if sqlTok.Tag != tok.Tag {
            fmt.Printf("token err, expect: %v, but got %v\n", tok, sqlTok)
            return
        }
    }

    fmt.Println("lexer testing pass...")
}

When we run this, the final line "lexer testing pass..." is printed, so the basic logic of the lexer is correct. Next we look at syntax parsing. For reasons of space we only handle a small subset of SQL; interested readers are welcome to complete the interpreter on their own. First we define the portion of the SQL grammar to be parsed:

FIELD -> ID
CONSTANT -> STRING | NUM
EXPRESSION -> FIELD | CONSTANT
TERM -> EXPRESSION EQ EXPRESSION
PREDICATE -> TERM (AND PREDICATE)?

QUERY -> SELECT SELECT_LIST FROM TABLE_LIST (WHERE PREDICATE)?
SELECT_LIST -> FIELD (COMMA SELECT_LIST)?
TABLE_LIST -> ID (COMMA TABLE_LIST)?

UPDATE_COMMAND -> INSERT_COMMAND | DELETE_COMMAND | MODIFY_COMMAND | CREATE_COMMAND
CREATE_COMMAND -> CREATE_TABLE | CREATE_VIEW | CREATE_INDEX
INSERT_COMMAND -> INSERT INTO ID LEFT_BRACE FIELD_LIST RIGHT_BRACE VALUES CONSTANT_LIST
FIELD_LIST -> FIELD (COMMA FIELD_LIST)?
CONSTANT_LIST -> CONSTANT (COMMA CONSTANT_LIST)?

DELETE_COMMAND -> DELETE FROM ID (WHERE PREDICATE)?

MODIFY_COMMAND -> UPDATE ID SET FIELD EQ EXPRESSION (WHERE PREDICATE)?

CREATE_TABLE -> CREATE TABLE FIELD_DEFS
FIELD_DEFS -> FIELD_DEF (COMMA FIELD_DEFS)?
FIELD_DEF -> ID TYPE_DEF
TYPE_DEF -> INT | VARCHAR LEFT_BRACE NUM RIGHT_BRACE

CREATE_VIEW -> CREATE VIEW ID AS QUERY
CREATE_INDEX -> CREATE INDEX ID ON ID LEFT_BRACE FIELD RIGHT_BRACE
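
To make the grammar concrete, here is a hand-worked expansion of the QUERY rule for a small statement (my own illustration, not part of the original text):

select name from student where age = 20

QUERY
 -> SELECT SELECT_LIST FROM TABLE_LIST WHERE PREDICATE
 -> SELECT FIELD FROM ID WHERE TERM                      (one field, one table, one term)
 -> SELECT ID FROM ID WHERE EXPRESSION EQ EXPRESSION
 -> SELECT ID FROM ID WHERE FIELD EQ CONSTANT
 -> select name from student where age = 20

Each nonterminal on the left will map to one parsing function below: Query, SelectList, TableList, Predicate, Term, Expression, Field and Constant.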

Next let's see how to parse SQL code with the grammar above. We use a top-down recursive-descent parser; the algorithm itself is covered in my compiler-principles videos on B站. Create a new folder named parser in the project and add a file parser.go to it. To keep things simple we implement one small piece at a time and then run the code to check the result. We start with the TERM rule; its parsing code is shown below:

package parser

import (
    "lexer"
    "query"
    "strconv"
    "strings"
)

type SQLParser struct {
    sqlLexer lexer.Lexer
}

func NewSQLParser(s string) *SQLParser {
    return &SQLParser{
        sqlLexer: lexer.NewLexer(s),
    }
}

func (p *SQLParser) Field() (lexer.Token, string) {
    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }
    if tok.Tag != lexer.ID {
        panic("Tag of FIELD is no ID")
    }

    return tok, p.sqlLexer.Lexeme
}

func (p *SQLParser) Constant() *query.Constant {
    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }

    switch tok.Tag {
    case lexer.STRING:
        s := strings.Clone(p.sqlLexer.Lexeme)
        return query.NewConstantWithString(&s)
    case lexer.NUM:
        // note: we hand out a pointer to a local variable; it stays valid after the
        // function returns because Go moves escaping variables to the heap
        v, err := strconv.Atoi(p.sqlLexer.Lexeme)
        if err != nil {
            panic("string is not a number")
        }
        return query.NewConstantWithInt(&v)
    default:
        panic("token is not string or num when parsing constant")
    }
}

func (p *SQLParser) Expression() *query.Expression {
    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }

    if tok.Tag == lexer.ID {
        p.sqlLexer.ReverseScan()
        _, str := p.Field()
        return query.NewExpressionWithString(str)
    } else {
        p.sqlLexer.ReverseScan()
        constant := p.Constant()
        return query.NewExpressionWithConstant(constant)
    }
}

func (p *SQLParser) Term() *query.Term {
    lhs := p.Expression()
    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }
    if tok.Tag != lexer.ASSIGN_OPERATOR {
        panic("should have = in middle of term")
    }

    rhs := p.Expression()
    return query.NewTerm(lhs, rhs)
}
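
The code above leans on a query package (Constant, Expression, Term, Predicate) that this installment does not list; it comes from earlier parts of the series. For readers who want to compile the snippets as they go, here is a minimal stand-in that is consistent with the constructors and methods used in this article (NewConstantWithString, NewConstantWithInt, NewExpressionWithString, NewExpressionWithConstant, NewTerm, NewPredicate, NewPredicateWithTerms, ConjoinWith, ToString). It is only my own sketch, not the author's actual implementation:

package query

import "strconv"

// Constant holds either a string or an integer literal from the SQL text.
type Constant struct {
    strVal *string
    intVal *int
}

func NewConstantWithString(s *string) *Constant { return &Constant{strVal: s} }
func NewConstantWithInt(v *int) *Constant       { return &Constant{intVal: v} }

func (c *Constant) ToString() string {
    if c.strVal != nil {
        return *c.strVal
    }
    return strconv.Itoa(*c.intVal)
}

// Expression is either a field name or a constant.
type Expression struct {
    fldName  string
    constant *Constant
}

func NewExpressionWithString(fldName string) *Expression { return &Expression{fldName: fldName} }
func NewExpressionWithConstant(c *Constant) *Expression  { return &Expression{constant: c} }

func (e *Expression) ToString() string {
    if e.constant != nil {
        return e.constant.ToString()
    }
    return e.fldName
}

// Term compares two expressions with "=".
type Term struct {
    lhs, rhs *Expression
}

func NewTerm(lhs *Expression, rhs *Expression) *Term { return &Term{lhs: lhs, rhs: rhs} }

func (t *Term) ToString() string { return t.lhs.ToString() + " = " + t.rhs.ToString() }

// Predicate is a conjunction (AND) of terms.
type Predicate struct {
    terms []*Term
}

func NewPredicate() *Predicate                 { return &Predicate{} }
func NewPredicateWithTerms(t *Term) *Predicate { return &Predicate{terms: []*Term{t}} }

// ConjoinWith appends the terms of another predicate to this one.
func (p *Predicate) ConjoinWith(other *Predicate) {
    p.terms = append(p.terms, other.terms...)
}

func (p *Predicate) ToString() string {
    result := ""
    for i, t := range p.terms {
        if i > 0 {
            result += " and "
        }
        result += t.ToString()
    }
    return result
}

Any implementation that keeps these signatures will work with the parser code in this article.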

The TERM rule parses expressions such as "age = 20" or "name = 'jim'" that commonly appear after where. Let's test the parsing code; put the following into main.go:

package main

import (
    "fmt"
    "parser"
)

func main() {
    sqlParser := parser.NewSQLParser("age = 20")
    term := sqlParser.Term()
    s := fmt.Sprintf("term: %v\n", term)
    fmt.Println(s)
}

You can watch me step through the code above in the debugger on B站, which makes the logic much easier to absorb. Next we implement parsers for the following rules:

PREDICATE -> TERM (AND PREDICATE)?
QUERY -> SELECT SELECT_LIST FROM TABLE_LIST (WHERE PREDICATE)?
SELECT_LIST -> FIELD (COMMA SELECT_LIST)?
TABLE_LIST -> ID (COMMA TABLE_LIST)?

Note that PREDICATE corresponds to everything after where. For example, in where a > b and c < d, the part "a > b and c < d" is the PREDICATE of the grammar. The corresponding code is:


func (p *SQLParser) Predicate() *query.Predicate {
    // predicate corresponds to the condition after where, e.g. in
    // "where a > b and c < d" the part "a > b and c < d" is the predicate
    pred := query.NewPredicateWithTerms(p.Term())
    tok, err := p.sqlLexer.Scan()
    // if the statement has already been fully consumed, just return what we have
    if err != nil && err.Error() != "EOF" {
        panic(err)
    }

    if tok.Tag == lexer.AND {
        pred.ConjoinWith(p.Predicate())
    } else {
        p.sqlLexer.ReverseScan()
    }

    return pred
}

func (p *SQLParser) Query() *QueryData {
    // Query parses a select statement
    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }

    if tok.Tag != lexer.SELECT {
        panic("token is not select")
    }

    fields := p.SelectList()
    tok, err = p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }

    if tok.Tag != lexer.FROM {
        panic("token is not from")
    }

    // get the tables the select statement operates on
    tables := p.TableList()
    // check whether the select statement has a where clause
    tok, err = p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }

    pred := query.NewPredicate()
    if tok.Tag == lexer.WHERE {
        pred = p.Predicate()
    } else {
        p.sqlLexer.ReverseScan()
    }

    return NewQueryData(fields, tables, pred)
}

func (p *SQLParser) SelectList() []string {
    // SELECT_LIST corresponds to the column names following the select keyword
    l := make([]string, 0)
    _, field := p.Field()
    l = append(l, field)

    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }
    if tok.Tag == lexer.COMMA {
        // several columns are selected, separated by commas
        selectList := p.SelectList()
        l = append(l, selectList...)
    } else {
        p.sqlLexer.ReverseScan()
    }

    return l
}

func (p *SQLParser) TableList() []string {
    // TABLE_LIST corresponds to the table names after from
    l := make([]string, 0)
    tok, err := p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }
    if tok.Tag != lexer.ID {
        panic("token is not id")
    }

    l = append(l, p.sqlLexer.Lexeme)
    tok, err = p.sqlLexer.Scan()
    if err != nil {
        panic(err)
    }
    if tok.Tag == lexer.COMMA {
        tableList := p.TableList()
        l = append(l, tableList...)
    } else {
        p.sqlLexer.ReverseScan()
    }

    return l
}

Add a new Go file named query_data.go. We use the QueryData structure to hold the result of parsing a select statement; its content is as follows:

package parser

// QueryData describes the information extracted from a select statement
import (
    "query"
)

type QueryData struct {
    fields []string
    tables []string
    pred   *query.Predicate
}

func NewQueryData(fields []string, tables []string, pred *query.Predicate) *QueryData {
    return &QueryData{
        fields: fields,
        tables: tables,
        pred:   pred,
    }
}

func (q *QueryData) Fields() []string {
    return q.fields
}

func (q *QueryData) Tables() []string {
    return q.tables
}

func (q *QueryData) Pred() *query.Predicate {
    return q.pred
}

func (q *QueryData) ToString() string {
    result := "select "
    for _, fldName := range q.fields {
        result += fldName + ", "
    }

    // drop the trailing comma and space
    result = result[:len(result)-2]
    result += " from "
    for _, tableName := range q.tables {
        result += tableName + ", "
    }
    // drop the trailing comma and space
    result = result[:len(result)-2]
    predStr := q.pred.ToString()
    if predStr != "" {
        result += " where " + predStr
    }

    return result
}

Suppose we have the following SQL statement:

select age, name, sex from student, department where age = 20 and sex = "male"

We can then call the Query method above to drive the parse: the column list after select, i.e. "age, name, sex", is handled by SelectList; the table names after from by TableList; and the content after where by Predicate, which turns age = 20 and sex = "male" into expressions. Add the following to main.go to run it:

package main

import (
    "fmt"
    "parser"
)

func main() {
    sqlParser := parser.NewSQLParser("select age, name, sex from student, department where age = 20 and sex = \"male\" ")
    queryData := sqlParser.Query()
    fmt.Println(queryData.ToString())
}
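
Assuming the query package sketch given earlier, running this prints something close to: select age, name, sex from student, department where age = 20 and sex = male. The exact formatting of the predicate part depends on how query.Predicate.ToString is implemented in your own code.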

For a step-by-step debugging walkthrough please see the videos on B站; stepping through the code is the best way to understand the parsing logic. Since this topic is rather long, it is split into several installments.
