  • lexer.coffee

  • §

    The CoffeeScript lexer. Uses a series of token-matching regular expressions to attempt matches against the beginning of the source code. When a match is found, a token is produced, we consume the match, and start again. Tokens are in the form:

    [tag, value, locationData]
    

    where locationData is {first_line, first_column, last_line, last_column, last_line_exclusive, last_column_exclusive}, a format that can be fed directly into Jison. These tokens are read by jison in the parser.lexer function defined in coffeescript.coffee.
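    For illustration only, a minimal JavaScript sketch (CoffeeScript compiles to JavaScript) of the token shape described above; makeToken here is a hypothetical stand-in for the lexer's own helper, not its actual API:

```javascript
// Hypothetical stand-in for the lexer's token helper: a token is simply an
// array of [tag, value, locationData]; extra flags (e.g. spaced, generated,
// origin) are attached as properties on the array itself.
function makeToken(tag, value, locationData) {
  return [tag, value, locationData];
}

const token = makeToken('IDENTIFIER', 'foo', {
  first_line: 0, first_column: 0,
  last_line: 0, last_column: 2,
  last_line_exclusive: 0, last_column_exclusive: 3,
});
token.spaced = true; // flags ride along on the array
```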

    {Rewriter, INVERSES, UNFINISHED} = require './rewriter'
  • §

    Import the helpers we need.

    {count, starts, compact, repeat, invertLiterate, merge,
    attachCommentsToNode, locationDataToString, throwSyntaxError,
    replaceUnicodeCodePointEscapes, flatten, parseNumber} = require './helpers'
  • §

    The Lexer Class

  • §
  • §

    The Lexer class reads a stream of CoffeeScript and divvies it up into tagged tokens. Some potential ambiguity in the grammar has been avoided by pushing some extra smarts into the Lexer.

    exports.Lexer = class Lexer
  • §

    tokenize is the Lexer's main method. Scan by attempting to match tokens one at a time, using a regular expression anchored at the start of the remaining code, or a custom recursive token-matching method (for interpolations). When the next token has been recorded, we move forward within the code past the token, and begin again.

    Each tokenizing method is responsible for returning the number of characters it has consumed.

    Before returning the token stream, run it through the Rewriter.

      tokenize: (code, opts = {}) ->
        @literate   = opts.literate  # Are we lexing literate CoffeeScript?
        @indent     = 0              # The current indentation level.
        @baseIndent = 0              # The overall minimum indentation level.
        @continuationLineAdditionalIndent = 0 # The over-indentation at the current level.
        @outdebt    = 0              # The under-outdentation at the current level.
        @indents    = []             # The stack of all current indentation levels.
        @indentLiteral = ''          # The indentation.
        @ends       = []             # The stack for pairing up tokens.
        @tokens     = []             # Stream of parsed tokens in the form `['TYPE', value, location data]`.
        @seenFor    = no             # Used to recognize `FORIN`, `FOROF` and `FORFROM` tokens.
        @seenImport = no             # Used to recognize `IMPORT FROM? AS?` tokens.
        @seenExport = no             # Used to recognize `EXPORT FROM? AS?` tokens.
        @importSpecifierList = no    # Used to identify when in an `IMPORT {...} FROM? ...`.
        @exportSpecifierList = no    # Used to identify when in an `EXPORT {...} FROM? ...`.
        @jsxDepth = 0                # Used to optimize JSX checks, how deep in JSX we are.
        @jsxObjAttribute = {}        # Used to detect if JSX attributes is wrapped in {} (<div {props...} />).
    
        @chunkLine =
          opts.line or 0             # The start line for the current @chunk.
        @chunkColumn =
          opts.column or 0           # The start column of the current @chunk.
        @chunkOffset =
          opts.offset or 0           # The start offset for the current @chunk.
        @locationDataCompensations =
          opts.locationDataCompensations or {} # The location data compensations for the current @chunk.
        code = @clean code           # The stripped, cleaned original source code.
  • §

    At every position, run through this list of attempted matches, short-circuiting if any of them succeed. Their order determines precedence: @literalToken is the fallback catch-all.

        i = 0
        while @chunk = code[i..]
          consumed = \
               @identifierToken() or
               @commentToken()    or
               @whitespaceToken() or
               @lineToken()       or
               @stringToken()     or
               @numberToken()     or
               @jsxToken()        or
               @regexToken()      or
               @jsToken()         or
               @literalToken()
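    The dispatch above can be sketched in plain JavaScript. This is a toy model under stated assumptions, not the lexer itself: each matcher returns the number of characters it consumed (0 on no match), and the first successful matcher wins:

```javascript
// Toy dispatcher: try matchers in priority order against the remaining chunk,
// consume whatever the winner reports, and start over.
function lex(code, matchers) {
  const tokens = [];
  let i = 0;
  while (i < code.length) {
    const chunk = code.slice(i);
    let consumed = 0;
    for (const match of matchers) {
      consumed = match(chunk, tokens);
      if (consumed) break; // first successful matcher wins
    }
    if (!consumed) throw new Error(`no matcher for: ${chunk[0]}`);
    i += consumed;
  }
  return tokens;
}

// Tiny matcher set: identifiers, numbers, and whitespace (emits no token).
const matchers = [
  (c, t) => { const m = /^[A-Za-z_]\w*/.exec(c); if (!m) return 0; t.push(['IDENTIFIER', m[0]]); return m[0].length; },
  (c, t) => { const m = /^\d+/.exec(c); if (!m) return 0; t.push(['NUMBER', m[0]]); return m[0].length; },
  (c) => { const m = /^\s+/.exec(c); return m ? m[0].length : 0; },
];
const toks = lex('foo 42', matchers);
```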
  • §

    Update position.

          [@chunkLine, @chunkColumn, @chunkOffset] = @getLineAndColumnFromChunk consumed
    
          i += consumed
    
          return {@tokens, index: i} if opts.untilBalanced and @ends.length is 0
    
        @closeIndentation()
        @error "missing #{end.tag}", (end.origin ? end)[2] if end = @ends.pop()
        return @tokens if opts.rewrite is off
        (new Rewriter).rewrite @tokens
  • §

    Preprocess the code to remove leading and trailing whitespace, carriage returns, etc. If we're lexing literate CoffeeScript, strip external Markdown by removing all lines that aren't indented by at least four spaces or a tab.

      clean: (code) ->
        thusFar = 0
        if code.charCodeAt(0) is BOM
          code = code.slice 1
          @locationDataCompensations[0] = 1
          thusFar += 1
        if WHITESPACE.test code
          code = "\n#{code}"
          @chunkLine--
          @locationDataCompensations[0] ?= 0
          @locationDataCompensations[0] -= 1
        code = code
          .replace /\r/g, (match, offset) =>
            @locationDataCompensations[thusFar + offset] = 1
            ''
          .replace TRAILING_SPACES, ''
        code = invertLiterate code if @literate
        code
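    A simplified JavaScript sketch of this cleaning pass, to show why the compensation table exists: every character removed before lexing shifts later offsets, and the recorded compensations let location data be mapped back to the original source. The trailing-space regex here is a per-line variant chosen for the sketch, not the lexer's exact TRAILING_SPACES:

```javascript
// Strip a BOM, delete carriage returns while recording a compensation for each
// removed character, then trim trailing spaces on each line.
function clean(code) {
  const compensations = {};
  let thusFar = 0;
  if (code.charCodeAt(0) === 0xFEFF) { // byte-order mark
    code = code.slice(1);
    compensations[0] = 1;
    thusFar += 1;
  }
  code = code.replace(/\r/g, (match, offset) => {
    compensations[thusFar + offset] = 1; // positions after this point shift by one
    return '';
  });
  code = code.replace(/[^\n\S]+$/gm, ''); // trailing spaces, line by line
  return {code, compensations};
}

const {code, compensations} = clean('\uFEFFa\r\nb  \n');
```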
  • §

    Tokenizers

  • §
  • §

    Matches identifying literals: variables, keywords, method names, etc. Check to ensure that JavaScript reserved words aren't being used as identifiers. Because CoffeeScript reserves a handful of keywords that are allowed in JavaScript, we're careful not to tag them as keywords here, so you can still do jQuery.is() even if is means === otherwise.

      identifierToken: ->
        inJSXTag = @atJSXTag()
        regex = if inJSXTag then JSX_ATTRIBUTE else IDENTIFIER
        return 0 unless match = regex.exec @chunk
        [input, id, colon] = match
  • §

    Preserve the length of id for location data.

        idLength = id.length
        poppedToken = undefined
        if id is 'own' and @tag() is 'FOR'
          @token 'OWN', id
          return id.length
        if id is 'from' and @tag() is 'YIELD'
          @token 'FROM', id
          return id.length
        if id is 'as' and @seenImport
          if @value() is '*'
            @tokens[@tokens.length - 1][0] = 'IMPORT_ALL'
          else if @value(yes) in COFFEE_KEYWORDS
            prev = @prev()
            [prev[0], prev[1]] = ['IDENTIFIER', @value(yes)]
          if @tag() in ['DEFAULT', 'IMPORT_ALL', 'IDENTIFIER']
            @token 'AS', id
            return id.length
        if id is 'as' and @seenExport
          if @tag() in ['IDENTIFIER', 'DEFAULT']
            @token 'AS', id
            return id.length
          if @value(yes) in COFFEE_KEYWORDS
            prev = @prev()
            [prev[0], prev[1]] = ['IDENTIFIER', @value(yes)]
            @token 'AS', id
            return id.length
        if id is 'default' and @seenExport and @tag() in ['EXPORT', 'AS']
          @token 'DEFAULT', id
          return id.length
        if id is 'assert' and (@seenImport or @seenExport) and @tag() is 'STRING'
          @token 'ASSERT', id
          return id.length
        if id is 'do' and regExSuper = /^(\s*super)(?!\(\))/.exec @chunk[3...]
          @token 'SUPER', 'super'
          @token 'CALL_START', '('
          @token 'CALL_END', ')'
          [input, sup] = regExSuper
          return sup.length + 3
    
        prev = @prev()
    
        tag =
          if colon or prev? and
             (prev[0] in ['.', '?.', '::', '?::'] or
             not prev.spaced and prev[0] is '@')
            'PROPERTY'
          else
            'IDENTIFIER'
    
        tokenData = {}
        if tag is 'IDENTIFIER' and (id in JS_KEYWORDS or id in COFFEE_KEYWORDS) and
           not (@exportSpecifierList and id in COFFEE_KEYWORDS)
          tag = id.toUpperCase()
          if tag is 'WHEN' and @tag() in LINE_BREAK
            tag = 'LEADING_WHEN'
          else if tag is 'FOR'
            @seenFor = {endsLength: @ends.length}
          else if tag is 'UNLESS'
            tag = 'IF'
          else if tag is 'IMPORT'
            @seenImport = yes
          else if tag is 'EXPORT'
            @seenExport = yes
          else if tag in UNARY
            tag = 'UNARY'
          else if tag in RELATION
            if tag isnt 'INSTANCEOF' and @seenFor
              tag = 'FOR' + tag
              @seenFor = no
            else
              tag = 'RELATION'
              if @value() is '!'
                poppedToken = @tokens.pop()
                tokenData.invert = poppedToken.data?.original ? poppedToken[1]
        else if tag is 'IDENTIFIER' and @seenFor and id is 'from' and
           isForFrom(prev)
          tag = 'FORFROM'
          @seenFor = no
  • §

    Throw an error on attempting to use get or set as keywords, or what CoffeeScript would normally interpret as calls to functions named get or set, i.e. get({foo: function () {}}).

        else if tag is 'PROPERTY' and prev
          if prev.spaced and prev[0] in CALLABLE and /^[gs]et$/.test(prev[1]) and
             @tokens.length > 1 and @tokens[@tokens.length - 2][0] not in ['.', '?.', '@']
            @error "'#{prev[1]}' cannot be used as a keyword, or as a function call
            without parentheses", prev[2]
          else if prev[0] is '.' and @tokens.length > 1 and (prevprev = @tokens[@tokens.length - 2])[0] is 'UNARY' and prevprev[1] is 'new'
            prevprev[0] = 'NEW_TARGET'
          else if prev[0] is '.' and @tokens.length > 1 and (prevprev = @tokens[@tokens.length - 2])[0] is 'IMPORT' and prevprev[1] is 'import'
            @seenImport = no
            prevprev[0] = 'IMPORT_META'
          else if @tokens.length > 2
            prevprev = @tokens[@tokens.length - 2]
            if prev[0] in ['@', 'THIS'] and prevprev and prevprev.spaced and
               /^[gs]et$/.test(prevprev[1]) and
               @tokens[@tokens.length - 3][0] not in ['.', '?.', '@']
              @error "'#{prevprev[1]}' cannot be used as a keyword, or as a
              function call without parentheses", prevprev[2]
    
        if tag is 'IDENTIFIER' and id in RESERVED and not inJSXTag
          @error "reserved word '#{id}'", length: id.length
    
        unless tag is 'PROPERTY' or @exportSpecifierList or @importSpecifierList
          if id in COFFEE_ALIASES
            alias = id
            id = COFFEE_ALIAS_MAP[id]
            tokenData.original = alias
          tag = switch id
            when '!'                 then 'UNARY'
            when '==', '!='          then 'COMPARE'
            when 'true', 'false'     then 'BOOL'
            when 'break', 'continue', \
                 'debugger'          then 'STATEMENT'
            when '&&', '||'          then id
            else  tag
    
        tagToken = @token tag, id, length: idLength, data: tokenData
        tagToken.origin = [tag, alias, tagToken[2]] if alias
        if poppedToken
          [tagToken[2].first_line, tagToken[2].first_column, tagToken[2].range[0]] =
            [poppedToken[2].first_line, poppedToken[2].first_column, poppedToken[2].range[0]]
        if colon
          colonOffset = input.lastIndexOf if inJSXTag then '=' else ':'
          colonToken = @token ':', ':', offset: colonOffset
          colonToken.jsxColon = yes if inJSXTag # used by rewriter
        if inJSXTag and tag is 'IDENTIFIER' and prev[0] isnt ':'
          @token ',', ',', length: 0, origin: tagToken, generated: yes
    
        input.length
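    The PROPERTY-versus-IDENTIFIER decision at the heart of this method can be sketched as follows; tagIdentifier is a hypothetical name for illustration, not a lexer method. An identifier after an accessor (or after an unspaced @) is a property, which is why keywords like is stay usable as in jQuery.is():

```javascript
// An identifier is a PROPERTY when it carries a colon (object key), follows an
// accessor token, or follows an unspaced `@`; otherwise it is an IDENTIFIER.
function tagIdentifier(id, prev, colon) {
  const accessors = ['.', '?.', '::', '?::'];
  const isProperty = Boolean(colon) ||
    (prev != null && (accessors.includes(prev[0]) ||
                      (!prev.spaced && prev[0] === '@')));
  return isProperty ? 'PROPERTY' : 'IDENTIFIER';
}

const dot = ['.', '.'];
const at = ['@', '@']; // unspaced this-shorthand
const afterDot = tagIdentifier('is', dot, false);  // jQuery.is — property
const afterAt = tagIdentifier('x', at, false);     // @x — property
const bare = tagIdentifier('is', null, false);     // bare position — keyword-eligible
```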
  • §

    Matches numbers, including decimals, hex, and exponential notation. Be careful not to interfere with ranges in progress.

      numberToken: ->
        return 0 unless match = NUMBER.exec @chunk
    
        number = match[0]
        lexedLength = number.length
    
        switch
          when /^0[BOX]/.test number
            @error "radix prefix in '#{number}' must be lowercase", offset: 1
          when /^0\d*[89]/.test number
            @error "decimal literal '#{number}' must not be prefixed with '0'", length: lexedLength
          when /^0\d+/.test number
            @error "octal literal '#{number}' must be prefixed with '0o'", length: lexedLength
    
        parsedValue = parseNumber number
        tokenData = {parsedValue}
    
        tag = if parsedValue is Infinity then 'INFINITY' else 'NUMBER'
        if tag is 'INFINITY'
          tokenData.original = number
        @token tag, number,
          length: lexedLength
          data: tokenData
        lexedLength
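    The three validations above can be condensed into a sketch; checkNumber is an illustrative helper that returns the first applicable error description (or null), reusing the same regexes the method applies:

```javascript
// Reject uppercase radix prefixes, `0`-prefixed decimals, and legacy octals,
// in the same order as the lexer's switch.
function checkNumber(number) {
  if (/^0[BOX]/.test(number)) return 'radix prefix must be lowercase';
  if (/^0\d*[89]/.test(number)) return "decimal literal must not be prefixed with '0'";
  if (/^0\d+/.test(number)) return "octal literal must be prefixed with '0o'";
  return null; // valid literal
}

const upper = checkNumber('0X1F'); // uppercase radix prefix
const decim = checkNumber('089');  // 0-prefixed decimal
const octal = checkNumber('07');   // legacy octal
const fine = checkNumber('0x1f');  // lowercase hex is fine
```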
  • §

    Matches strings, including multiline strings, as well as heredocs, with or without interpolation.

      stringToken: ->
        [quote] = STRING_START.exec(@chunk) || []
        return 0 unless quote
  • §

    If the preceding token is from and this is an import or export statement, properly tag the from.

        prev = @prev()
        if prev and @value() is 'from' and (@seenImport or @seenExport)
          prev[0] = 'FROM'
    
        regex = switch quote
          when "'"   then STRING_SINGLE
          when '"'   then STRING_DOUBLE
          when "'''" then HEREDOC_SINGLE
          when '"""' then HEREDOC_DOUBLE
    
        {tokens, index: end} = @matchWithInterpolations regex, quote
    
        heredoc = quote.length is 3
        if heredoc
  • §

    Find the smallest indentation. It will be removed from all lines later.

          indent = null
          doc = (token[1] for token, i in tokens when token[0] is 'NEOSTRING').join '#{}'
          while match = HEREDOC_INDENT.exec doc
            attempt = match[1]
            indent = attempt if indent is null or 0 < attempt.length < indent.length
    
        delimiter = quote.charAt(0)
        @mergeInterpolationTokens tokens, {quote, indent, endOffset: end}, (value) =>
          @validateUnicodeCodePointEscapes value, delimiter: quote
    
        if @atJSXTag()
          @token ',', ',', length: 0, origin: @prev, generated: yes
    
        end
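    The minimal-indentation scan over heredoc lines can be sketched in JavaScript. The regex below mirrors the lexer's HEREGEX-style HEREDOC_INDENT pattern as an assumption; the smallest nonempty run of leading whitespace wins and is later stripped from every line:

```javascript
// Scan the heredoc body for the shortest nonempty leading whitespace.
function minimalIndent(doc) {
  const HEREDOC_INDENT = /\n+([^\n\S]*)(?=\S)/g; // assumed pattern, for illustration
  let indent = null;
  let match;
  while ((match = HEREDOC_INDENT.exec(doc))) {
    const attempt = match[1];
    if (indent === null || (attempt.length > 0 && attempt.length < indent.length)) {
      indent = attempt;
    }
  }
  return indent;
}

// Two lines indented 4 and 2 spaces: the common prefix to strip is 2 spaces.
const indent = minimalIndent('\n    line one\n  line two\n');
```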
  • §

    Matches and consumes comments. The comments are taken out of the token stream and saved for later, to be reinserted into the output after everything has been parsed and the JavaScript code has been generated.

      commentToken: (chunk = @chunk, {heregex, returnCommentTokens = no, offsetInChunk = 0} = {}) ->
        return 0 unless match = chunk.match COMMENT
        [commentWithSurroundingWhitespace, hereLeadingWhitespace, hereComment, hereTrailingWhitespace, lineComment] = match
        contents = null
  • §

    Does this comment follow code on the same line?

        leadingNewline = /^\s*\n+\s*#/.test commentWithSurroundingWhitespace
        if hereComment
          matchIllegal = HERECOMMENT_ILLEGAL.exec hereComment
          if matchIllegal
            @error "block comments cannot contain #{matchIllegal[0]}",
              offset: '###'.length + matchIllegal.index, length: matchIllegal[0].length
  • §

    Parse indentation or outdentation as if this block comment didn't exist.

          chunk = chunk.replace "####{hereComment}###", ''
  • §

    Remove leading newlines, like Rewriter::removeLeadingNewlines, to avoid creating unwanted TERMINATOR tokens.

          chunk = chunk.replace /^\n+/, ''
          @lineToken {chunk}
  • §

    Pull out the ###-style comment's content, and format it.

          content = hereComment
          contents = [{
            content
            length: commentWithSurroundingWhitespace.length - hereLeadingWhitespace.length - hereTrailingWhitespace.length
            leadingWhitespace: hereLeadingWhitespace
          }]
        else
  • §

    The COMMENT regex captures successive line comments as one token. Remove any leading newlines before the first comment, but preserve blank lines between line comments.

          leadingNewlines = ''
          content = lineComment.replace /^(\n*)/, (leading) ->
            leadingNewlines = leading
            ''
          precedingNonCommentLines = ''
          hasSeenFirstCommentLine = no
          contents =
            content.split '\n'
            .map (line, index) ->
              unless line.indexOf('#') > -1
                precedingNonCommentLines += "\n#{line}"
                return
              leadingWhitespace = ''
              content = line.replace /^([ |\t]*)#/, (_, whitespace) ->
                leadingWhitespace = whitespace
                ''
              comment = {
                content
                length: '#'.length + content.length
                leadingWhitespace: "#{unless hasSeenFirstCommentLine then leadingNewlines else ''}#{precedingNonCommentLines}#{leadingWhitespace}"
                precededByBlankLine: !!precedingNonCommentLines
              }
              hasSeenFirstCommentLine = yes
              precedingNonCommentLines = ''
              comment
            .filter (comment) -> comment
    
        getIndentSize = ({leadingWhitespace, nonInitial}) ->
          lastNewlineIndex = leadingWhitespace.lastIndexOf '\n'
          if hereComment? or not nonInitial
            return null unless lastNewlineIndex > -1
          else
            lastNewlineIndex ?= -1
          leadingWhitespace.length - 1 - lastNewlineIndex
        commentAttachments = for {content, length, leadingWhitespace, precededByBlankLine}, i in contents
          nonInitial = i isnt 0
          leadingNewlineOffset = if nonInitial then 1 else 0
          offsetInChunk += leadingNewlineOffset + leadingWhitespace.length
          indentSize = getIndentSize {leadingWhitespace, nonInitial}
          noIndent = not indentSize? or indentSize is -1
          commentAttachment = {
            content
            here: hereComment?
            newLine: leadingNewline or nonInitial # Line comments after the first one start new lines, by definition.
            locationData: @makeLocationData {offsetInChunk, length}
            precededByBlankLine
            indentSize
            indented:  not noIndent and indentSize > @indent
            outdented: not noIndent and indentSize < @indent
          }
          commentAttachment.heregex = yes if heregex
          offsetInChunk += length
          commentAttachment
    
        prev = @prev()
        unless prev
  • §

    If there's no previous token, create a placeholder token to attach this comment to, and follow with a newline.

          commentAttachments[0].newLine = yes
          @lineToken chunk: @chunk[commentWithSurroundingWhitespace.length..], offset: commentWithSurroundingWhitespace.length # Set the indent.
          placeholderToken = @makeToken 'JS', '', offset: commentWithSurroundingWhitespace.length, generated: yes
          placeholderToken.comments = commentAttachments
          @tokens.push placeholderToken
          @newlineToken commentWithSurroundingWhitespace.length
        else
          attachCommentsToNode commentAttachments, prev
    
        return commentAttachments if returnCommentTokens
        commentWithSurroundingWhitespace.length
  • §

    Matches JavaScript interpolated directly into the source via backticks.

      jsToken: ->
        return 0 unless @chunk.charAt(0) is '`' and
          (match = (matchedHere = HERE_JSTOKEN.exec(@chunk)) or JSTOKEN.exec(@chunk))
  • §

    Convert escaped backticks to backticks, and escaped backslashes before escaped backticks to backslashes.

        script = match[1]
        {length} = match[0]
        @token 'JS', script, {length, data: {here: !!matchedHere}}
        length
  • §

    Matches regular expression literals, as well as multiline extended ones. Lexing regular expressions is difficult to distinguish from division, so we borrow some basic heuristics from JavaScript and Ruby.

      regexToken: ->
        switch
          when match = REGEX_ILLEGAL.exec @chunk
            @error "regular expressions cannot begin with #{match[2]}",
              offset: match.index + match[1].length
          when match = @matchWithInterpolations HEREGEX, '///'
            {tokens, index} = match
            comments = []
            while matchedComment = HEREGEX_COMMENT.exec @chunk[0...index]
              {index: commentIndex} = matchedComment
              [fullMatch, leadingWhitespace, comment] = matchedComment
              comments.push {comment, offsetInChunk: commentIndex + leadingWhitespace.length}
            commentTokens = flatten(
              for commentOpts in comments
                @commentToken commentOpts.comment, Object.assign commentOpts, heregex: yes, returnCommentTokens: yes
            )
          when match = REGEX.exec @chunk
            [regex, body, closed] = match
            @validateEscapes body, isRegex: yes, offsetInChunk: 1
            index = regex.length
            prev = @prev()
            if prev
              if prev.spaced and prev[0] in CALLABLE
                return 0 if not closed or POSSIBLY_DIVISION.test regex
              else if prev[0] in NOT_REGEX
                return 0
            @error 'missing / (unclosed regex)' unless closed
          else
            return 0
    
        [flags] = REGEX_FLAGS.exec @chunk[index..]
        end = index + flags.length
        origin = @makeToken 'REGEX', null, length: end
        switch
          when not VALID_FLAGS.test flags
            @error "invalid regular expression flags #{flags}", offset: index, length: flags.length
          when regex or tokens.length is 1
            delimiter = if body then '/' else '///'
            body ?= tokens[0][1]
            @validateUnicodeCodePointEscapes body, {delimiter}
            @token 'REGEX', "/#{body}/#{flags}", {length: end, origin, data: {delimiter}}
          else
            @token 'REGEX_START', '(',    {length: 0, origin, generated: yes}
            @token 'IDENTIFIER', 'RegExp', length: 0, generated: yes
            @token 'CALL_START', '(',      length: 0, generated: yes
            @mergeInterpolationTokens tokens, {double: yes, heregex: {flags}, endOffset: end - flags.length, quote: '///'}, (str) =>
              @validateUnicodeCodePointEscapes str, {delimiter}
            if flags
              @token ',', ',',                    offset: index - 1, length: 0, generated: yes
              @token 'STRING', '"' + flags + '"', offset: index,     length: flags.length
            @token ')', ')',                      offset: end,       length: 0, generated: yes
            @token 'REGEX_END', ')',              offset: end,       length: 0, generated: yes
  • §

    Explicitly attach any heregex comments to the REGEX/REGEX_END token.

        if commentTokens?.length
          addTokenData @tokens[@tokens.length - 1],
            heregexCommentTokens: commentTokens
    
        end
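    The regex-versus-division heuristic used above can be approximated as follows. The CALLABLE and NOT_REGEX lists here are illustrative subsets of the lexer's real constants, and looksLikeRegex is a hypothetical name: after a spaced callable token, an unclosed or division-like slash is division; after an unspaced value-producing token, a slash is always division:

```javascript
// Approximate subsets of the lexer's CALLABLE / NOT_REGEX token lists.
const CALLABLE = ['IDENTIFIER', 'PROPERTY', ')', ']', '@', 'THIS', 'SUPER'];
const NOT_REGEX = ['NUMBER', 'STRING', 'BOOL', ')', ']', '}', 'IDENTIFIER'];

function looksLikeRegex(prev, closed, possiblyDivision) {
  if (!prev) return true; // nothing before the slash: must be a regex
  if (prev.spaced && CALLABLE.includes(prev[0])) {
    return closed && !possiblyDivision; // `a /re/` yes; `a / b` no
  }
  return !NOT_REGEX.includes(prev[0]); // `a/b` is division
}

const spacedId = ['IDENTIFIER', 'a']; spacedId.spaced = true;
const tightId = ['IDENTIFIER', 'a'];
const regexCase = looksLikeRegex(spacedId, true, false); // a /re/
const divSpaced = looksLikeRegex(spacedId, true, true);  // a / b /
const divTight = looksLikeRegex(tightId, true, false);   // a/b
```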
  • §

    Matches newlines, indents, and outdents, and determines which is which. If we can detect that the current line is continued onto the next line, then the newline is suppressed:

    elements
      .each( ... )
      .map( ... )
    

    Keeps track of the level of indentation, because a single outdent token can close multiple indents, so we need to know how far in we happen to be.

      lineToken: ({chunk = @chunk, offset = 0} = {}) ->
        return 0 unless match = MULTI_DENT.exec chunk
        indent = match[0]
    
        prev = @prev()
        backslash = prev?[0] is '\\'
        @seenFor = no unless (backslash or @seenFor?.endsLength < @ends.length) and @seenFor
        @seenImport = no unless (backslash and @seenImport) or @importSpecifierList
        @seenExport = no unless (backslash and @seenExport) or @exportSpecifierList
    
        size = indent.length - 1 - indent.lastIndexOf '\n'
        noNewlines = @unfinished()
    
        newIndentLiteral = if size > 0 then indent[-size..] else ''
        unless /^(.?)\1*$/.exec newIndentLiteral
          @error 'mixed indentation', offset: indent.length
          return indent.length
    
        minLiteralLength = Math.min newIndentLiteral.length, @indentLiteral.length
        if newIndentLiteral[...minLiteralLength] isnt @indentLiteral[...minLiteralLength]
          @error 'indentation mismatch', offset: indent.length
          return indent.length
    
        if size - @continuationLineAdditionalIndent is @indent
          if noNewlines then @suppressNewlines() else @newlineToken offset
          return indent.length
    
        if size > @indent
          if noNewlines
            @continuationLineAdditionalIndent = size - @indent unless backslash
            if @continuationLineAdditionalIndent
              prev.continuationLineIndent = @indent + @continuationLineAdditionalIndent
            @suppressNewlines()
            return indent.length
          unless @tokens.length
            @baseIndent = @indent = size
            @indentLiteral = newIndentLiteral
            return indent.length
          diff = size - @indent + @outdebt
          @token 'INDENT', diff, offset: offset + indent.length - size, length: size
          @indents.push diff
          @ends.push {tag: 'OUTDENT'}
          @outdebt = @continuationLineAdditionalIndent = 0
          @indent = size
          @indentLiteral = newIndentLiteral
        else if size < @baseIndent
          @error 'missing indentation', offset: offset + indent.length
        else
          endsContinuationLineIndentation = @continuationLineAdditionalIndent > 0
          @continuationLineAdditionalIndent = 0
          @outdentToken {moveOut: @indent - size, noNewlines, outdentLength: indent.length, offset, indentSize: size, endsContinuationLineIndentation}
        indent.length
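    The core indent bookkeeping can be sketched with a toy model (assumed helper name, whole-line input rather than chunks): each open indent's size is pushed on a stack, and a single dedented line may pop, and emit OUTDENT for, several levels at once:

```javascript
// Toy indentation tokenizer: emit INDENT when a line indents past the current
// level, and pop as many stacked levels as needed when it dedents.
function indentTokens(lines) {
  const tokens = [];
  const indents = []; // stack of open indentation deltas
  let indent = 0;
  for (const line of lines) {
    const size = /^ */.exec(line)[0].length;
    if (size > indent) {
      tokens.push(['INDENT', size - indent]);
      indents.push(size - indent);
    } else {
      let moveOut = indent - size;
      while (moveOut > 0) {
        const dent = indents.pop(); // one dedent can close several indents
        tokens.push(['OUTDENT', dent]);
        moveOut -= dent;
      }
    }
    indent = size;
  }
  return tokens;
}

const toks = indentTokens(['if x', '  if y', '    z', 'w']);
```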
  • §

    Record an outdent token or multiple tokens, if we happen to be moving back inwards past several recorded indents. Sets a new @indent value.

      outdentToken: ({moveOut, noNewlines, outdentLength = 0, offset = 0, indentSize, endsContinuationLineIndentation}) ->
        decreasedIndent = @indent - moveOut
        while moveOut > 0
          lastIndent = @indents[@indents.length - 1]
          if not lastIndent
            @outdebt = moveOut = 0
          else if @outdebt and moveOut <= @outdebt
            @outdebt -= moveOut
            moveOut   = 0
          else
            dent = @indents.pop() + @outdebt
            if outdentLength and @chunk[outdentLength] in INDENTABLE_CLOSERS
              decreasedIndent -= dent - moveOut
              moveOut = dent
            @outdebt = 0
  • §

    pair might call outdentToken, so preserve decreasedIndent.

            @pair 'OUTDENT'
            @token 'OUTDENT', moveOut, length: outdentLength, indentSize: indentSize + moveOut - dent
            moveOut -= dent
        @outdebt -= moveOut if dent
        @suppressSemicolons()
    
        unless @tag() is 'TERMINATOR' or noNewlines
          terminatorToken = @token 'TERMINATOR', '\n', offset: offset + outdentLength, length: 0
          terminatorToken.endsContinuationLineIndentation = {preContinuationLineIndent: @indent} if endsContinuationLineIndentation
        @indent = decreasedIndent
        @indentLiteral = @indentLiteral[...decreasedIndent]
        this
  • §

    Matches and consumes non-meaningful whitespace. Tags the previous token as being "spaced", because there are some cases where it makes a difference.

      whitespaceToken: ->
        return 0 unless (match = WHITESPACE.exec @chunk) or
                        (nline = @chunk.charAt(0) is '\n')
        prev = @prev()
        prev[if match then 'spaced' else 'newLine'] = true if prev
        if match then match[0].length else 0
  • §

    Generate a newline token. Consecutive newlines get merged together.

      newlineToken: (offset) ->
        @suppressSemicolons()
        @token 'TERMINATOR', '\n', {offset, length: 0} unless @tag() is 'TERMINATOR'
        this
  • §

    Use a \ at a line-ending to suppress the newline. The slash is removed here once its job is done.

      suppressNewlines: ->
        prev = @prev()
        if prev[1] is '\\'
          if prev.comments and @tokens.length > 1
  • §

    @tokens.length should be at least 2 (some code, and then the \). If something puts a \ after nothing, they deserve to lose any comments that trail it.

            attachCommentsToNode prev.comments, @tokens[@tokens.length - 2]
          @tokens.pop()
        this
    
      jsxToken: ->
        firstChar = @chunk[0]
  • §

    Check the previous token to detect if the attribute is spread.

        prevChar = if @tokens.length > 0 then @tokens[@tokens.length - 1][0] else ''
        if firstChar is '<'
          match = JSX_IDENTIFIER.exec(@chunk[1...]) or JSX_FRAGMENT_IDENTIFIER.exec(@chunk[1...])
          return 0 unless match and (
            @jsxDepth > 0 or
  • §

    Not the right-hand side of an unspaced comparison (i.e. a<b).

            not (prev = @prev()) or
            prev.spaced or
            prev[0] not in COMPARABLE_LEFT_SIDE
          )
          [input, id] = match
          fullId = id
          if '.' in id
            [id, properties...] = id.split '.'
          else
            properties = []
          tagToken = @token 'JSX_TAG', id,
            length: id.length + 1
            data:
              openingBracketToken: @makeToken '<', '<'
              tagNameToken: @makeToken 'IDENTIFIER', id, offset: 1
          offset = id.length + 1
          for property in properties
            @token '.', '.', {offset}
            offset += 1
            @token 'PROPERTY', property, {offset}
            offset += property.length
          @token 'CALL_START', '(', generated: yes
          @token '[', '[', generated: yes
          @ends.push {tag: '/>', origin: tagToken, name: id, properties}
          @jsxDepth++
          return fullId.length + 1
        else if jsxTag = @atJSXTag()
          if @chunk[...2] is '/>' # Self-closing tag.
            @pair '/>'
            @token ']', ']',
              length: 2
              generated: yes
            @token 'CALL_END', ')',
              length: 2
              generated: yes
              data:
                selfClosingSlashToken: @makeToken '/', '/'
                closingBracketToken: @makeToken '>', '>', offset: 1
            @jsxDepth--
            return 2
          else if firstChar is '{'
            if prevChar is ':'
  • §

    This token represents the start of a JSX attribute value that's an expression (e.g. the {b} in <div a={b} />). Our grammar represents the beginnings of expressions as ( tokens, so make this into a ( token that displays as {.

              token = @token '(', '{'
              @jsxObjAttribute[@jsxDepth] = no
  • §

    Tag the attribute name as JSX.

              addTokenData @tokens[@tokens.length - 3],
                jsx: yes
            else
              token = @token '{', '{'
              @jsxObjAttribute[@jsxDepth] = yes
            @ends.push {tag: '}', origin: token}
            return 1
          else if firstChar is '>' # end of opening tag
  • §

    Ignore terminators inside the tag.

            {origin: openingTagToken} = @pair '/>' # As if the current tag was self-closing.
            @token ']', ']',
              generated: yes
              data:
                closingBracketToken: @makeToken '>', '>'
            @token ',', 'JSX_COMMA', generated: yes
            {tokens, index: end} =
              @matchWithInterpolations INSIDE_JSX, '>', '</', JSX_INTERPOLATION
            @mergeInterpolationTokens tokens, {endOffset: end, jsx: yes}, (value) =>
              @validateUnicodeCodePointEscapes value, delimiter: '>'
            match = JSX_IDENTIFIER.exec(@chunk[end...]) or JSX_FRAGMENT_IDENTIFIER.exec(@chunk[end...])
            if not match or match[1] isnt "#{jsxTag.name}#{(".#{property}" for property in jsxTag.properties).join ''}"
              @error "expected corresponding JSX closing tag for #{jsxTag.name}",
                jsxTag.origin.data.tagNameToken[2]
            [, fullTagName] = match
            afterTag = end + fullTagName.length
            if @chunk[afterTag] isnt '>'
              @error "missing closing > after tag name", offset: afterTag, length: 1
  • §

    -2/+2 for the opening </ and +1 for the closing >.

            endToken = @token 'CALL_END', ')',
              offset: end - 2
              length: fullTagName.length + 3
              generated: yes
              data:
                closingTagOpeningBracketToken: @makeToken '<', '<', offset: end - 2
                closingTagSlashToken: @makeToken '/', '/', offset: end - 1
  • §

    TODO: a single token for complex tag names? E.g. < / A . B >

                closingTagNameToken: @makeToken 'IDENTIFIER', fullTagName, offset: end
                closingTagClosingBracketToken: @makeToken '>', '>', offset: end + fullTagName.length
  • §

    Make the closing tag's location data more easily accessible to the grammar.

            addTokenData openingTagToken, endToken.data
            @jsxDepth--
            return afterTag + 1
          else
            return 0
        else if @atJSXTag 1
          if firstChar is '}'
            @pair firstChar
            if @jsxObjAttribute[@jsxDepth]
              @token '}', '}'
              @jsxObjAttribute[@jsxDepth] = no
            else
              @token ')', '}'
            @token ',', ',', generated: yes
            return 1
          else
            return 0
        else
          return 0
    
      atJSXTag: (depth = 0) ->
        return no if @jsxDepth is 0
        i = @ends.length - 1
        i-- while @ends[i]?.tag is 'OUTDENT' or depth-- > 0 # Ignore indents.
        last = @ends[i]
        last?.tag is '/>' and last
  • §

    We treat all other single characters as a token. E.g.: ( ) , . ! Multi-character operators are also literal tokens, so that Jison can assign the proper order of operations. There are some symbols that we tag specially here: ; and newlines are both treated as a TERMINATOR, we distinguish parentheses that indicate a method invocation from regular parentheses, and so on.

      literalToken: ->
        if match = OPERATOR.exec @chunk
          [value] = match
          @tagParameters() if CODE.test value
        else
          value = @chunk.charAt 0
        tag  = value
        prev = @prev()
    
        if prev and value in ['=', COMPOUND_ASSIGN...]
          skipToken = false
          if value is '=' and prev[1] in ['||', '&&'] and not prev.spaced
            prev[0] = 'COMPOUND_ASSIGN'
            prev[1] += '='
            prev.data.original += '=' if prev.data?.original
            prev[2].range = [
              prev[2].range[0]
              prev[2].range[1] + 1
            ]
            prev[2].last_column += 1
            prev[2].last_column_exclusive += 1
            prev = @tokens[@tokens.length - 2]
            skipToken = true
          if prev and prev[0] isnt 'PROPERTY'
            origin = prev.origin ? prev
            message = isUnassignable prev[1], origin[1]
            @error message, origin[2] if message
          return value.length if skipToken
    
        if value is '(' and prev?[0] is 'IMPORT'
          prev[0] = 'DYNAMIC_IMPORT'
    
        if value is '{' and @seenImport
          @importSpecifierList = yes
        else if @importSpecifierList and value is '}'
          @importSpecifierList = no
        else if value is '{' and prev?[0] is 'EXPORT'
          @exportSpecifierList = yes
        else if @exportSpecifierList and value is '}'
          @exportSpecifierList = no
    
        if value is ';'
          @error 'unexpected ;' if prev?[0] in ['=', UNFINISHED...]
          @seenFor = @seenImport = @seenExport = no
          tag = 'TERMINATOR'
        else if value is '*' and prev?[0] is 'EXPORT'
          tag = 'EXPORT_ALL'
        else if value in MATH            then tag = 'MATH'
        else if value in COMPARE         then tag = 'COMPARE'
        else if value in COMPOUND_ASSIGN then tag = 'COMPOUND_ASSIGN'
        else if value in UNARY           then tag = 'UNARY'
        else if value in UNARY_MATH      then tag = 'UNARY_MATH'
        else if value in SHIFT           then tag = 'SHIFT'
        else if value is '?' and prev?.spaced then tag = 'BIN?'
        else if prev
          if value is '(' and not prev.spaced and prev[0] in CALLABLE
            prev[0] = 'FUNC_EXIST' if prev[0] is '?'
            tag = 'CALL_START'
          else if value is '[' and ((prev[0] in INDEXABLE and not prev.spaced) or
             (prev[0] is '::')) # `.prototype` can’t be a method you can call.
            tag = 'INDEX_START'
            switch prev[0]
              when '?'  then prev[0] = 'INDEX_SOAK'
        token = @makeToken tag, value
        switch value
          when '(', '{', '[' then @ends.push {tag: INVERSES[value], origin: token}
          when ')', '}', ']' then @pair value
        @tokens.push token
        value.length
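  • §

    The CALL_START branch above hinges on the `spaced` flag of the previous token: `f(x)` is a call, `f (x)` is grouping. A minimal JavaScript sketch of that heuristic (`tagParen` is a hypothetical helper, and the CALLABLE list is abbreviated from the constants defined later in this file):

```javascript
// Sketch of the implicit-call heuristic in `literalToken`: a `(` directly
// following an unspaced, callable token starts a function call; any other
// `(` is ordinary grouping.
const CALLABLE = ['IDENTIFIER', 'PROPERTY', ')', ']', '?', '@', 'THIS', 'SUPER'];

function tagParen(prevTag, prevSpaced) {
  if (!prevSpaced && CALLABLE.includes(prevTag)) return 'CALL_START';
  return '(';
}

console.log(tagParen('IDENTIFIER', false)); // `f(x)`  - CALL_START
console.log(tagParen('IDENTIFIER', true));  // `f (x)` - plain `(`
```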
  • §

    Token Manipulators

  • §
  • §

    A source of ambiguity in our grammar used to be parameter lists in function definitions versus argument lists in function calls. Walk backwards, tagging parameters specially in order to make things easier for the parser.

      tagParameters: ->
        return @tagDoIife() if @tag() isnt ')'
        stack = []
        {tokens} = this
        i = tokens.length
        paramEndToken = tokens[--i]
        paramEndToken[0] = 'PARAM_END'
        while tok = tokens[--i]
          switch tok[0]
            when ')'
              stack.push tok
            when '(', 'CALL_START'
              if stack.length then stack.pop()
              else if tok[0] is '('
                tok[0] = 'PARAM_START'
                return @tagDoIife i - 1
              else
                paramEndToken[0] = 'CALL_END'
                return this
        this
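  • §

    The backwards walk can be sketched in JavaScript over a bare array of tags (the reduced token shape and the function below are illustrative, not the lexer's actual data structures):

```javascript
// Sketch of `tagParameters`: starting from the trailing `)`, walk backwards,
// using a stack to skip over nested parens, until the matching opener is
// found; a `(` opener marks a parameter list, while a `CALL_START` opener
// means this was really a call, so the end is retagged `CALL_END`.
function tagParameters(tags) {
  if (tags[tags.length - 1] !== ')') return tags;
  tags[tags.length - 1] = 'PARAM_END';
  const stack = [];
  for (let i = tags.length - 2; i >= 0; i--) {
    const tag = tags[i];
    if (tag === ')') {
      stack.push(tag);
    } else if (tag === '(' || tag === 'CALL_START') {
      if (stack.length) stack.pop();
      else if (tag === '(') { tags[i] = 'PARAM_START'; return tags; }
      else { tags[tags.length - 1] = 'CALL_END'; return tags; }
    }
  }
  return tags;
}

// `(a, (b)) ->` - the outer parens become the parameter list:
console.log(tagParameters(['(', 'IDENTIFIER', ',', '(', 'IDENTIFIER', ')', ')']));
```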
  • §

    Tag a do followed by a function differently from a do followed by e.g. an identifier, to allow for different grammar precedence.

      tagDoIife: (tokenIndex) ->
        tok = @tokens[tokenIndex ? @tokens.length - 1]
        return this unless tok?[0] is 'DO'
        tok[0] = 'DO_IIFE'
        this
  • §

    Close up all remaining open blocks at the end of the file.

      closeIndentation: ->
        @outdentToken moveOut: @indent, indentSize: 0
  • §

    Matches the contents of a delimited token, and expands variables and expressions inside it, using a Ruby-like notation for substitution of arbitrary expressions.

    "Hello #{name.capitalize()}."
    

    If interpolations are encountered, this method will recursively create a new lexer and tokenize until the { of #{ is balanced with a }.

    • regex matches the contents of the token (but not delimiter, and not #{ if interpolations are desired).
    • delimiter is the delimiter of the token. Examples are ', ", ''', """ and ///.
    • closingDelimiter is different from delimiter only in JSX.
    • interpolators matches the start of an interpolation; for JSX it's both { and < (i.e. nested JSX tags).

    This method allows us to have strings within interpolations within strings, ad infinitum.

      matchWithInterpolations: (regex, delimiter, closingDelimiter = delimiter, interpolators = /^#\{/) ->
        tokens = []
        offsetInChunk = delimiter.length
        return null unless @chunk[...offsetInChunk] is delimiter
        str = @chunk[offsetInChunk..]
        loop
          [strPart] = regex.exec str
    
          @validateEscapes strPart, {isRegex: delimiter.charAt(0) is '/', offsetInChunk}
  • §

    Push a fake 'NEOSTRING' token, which will get turned into a real string later.

          tokens.push @makeToken 'NEOSTRING', strPart, offset: offsetInChunk
    
          str = str[strPart.length..]
          offsetInChunk += strPart.length
    
          break unless match = interpolators.exec str
          [interpolator] = match
  • §

    To remove the # in #{.

          interpolationOffset = interpolator.length - 1
          [line, column, offset] = @getLineAndColumnFromChunk offsetInChunk + interpolationOffset
          rest = str[interpolationOffset..]
          {tokens: nested, index} =
            new Lexer().tokenize rest, {line, column, offset, untilBalanced: on, @locationDataCompensations}
  • §

    Account for the # in #{.

          index += interpolationOffset
    
          braceInterpolator = str[index - 1] is '}'
          if braceInterpolator
  • §

    Turn the leading and trailing { and } into parentheses. Unnecessary parentheses will be removed later.

            [open, ..., close] = nested
            open[0]  = 'INTERPOLATION_START'
            open[1]  = '('
            open[2].first_column -= interpolationOffset
            open[2].range = [
              open[2].range[0] - interpolationOffset
              open[2].range[1]
            ]
            close[0]  = 'INTERPOLATION_END'
            close[1] = ')'
            close.origin = ['', 'end of interpolation', close[2]]
  • §

    Remove leading 'TERMINATOR' (if any).

          nested.splice 1, 1 if nested[1]?[0] is 'TERMINATOR'
  • §

    Remove trailing 'INDENT'/'OUTDENT' pair (if any).

          nested.splice -3, 2 if nested[nested.length - 3]?[0] is 'INDENT' and nested[nested.length - 2][0] is 'OUTDENT'
    
          unless braceInterpolator
  • §

    We are not using { and }, so wrap the interpolated tokens instead.

            open = @makeToken 'INTERPOLATION_START', '(', offset: offsetInChunk,         length: 0, generated: yes
            close = @makeToken 'INTERPOLATION_END', ')',  offset: offsetInChunk + index, length: 0, generated: yes
            nested = [open, nested..., close]
  • §

    Push a fake 'TOKENS' token, which will get turned into real tokens later.

          tokens.push ['TOKENS', nested]
    
          str = str[index..]
          offsetInChunk += index
    
        unless str[...closingDelimiter.length] is closingDelimiter
          @error "missing #{closingDelimiter}", length: delimiter.length
    
        {tokens, index: offsetInChunk + closingDelimiter.length}
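  • §

    The brace balancing that lets interpolations nest can be sketched on its own: skip the #, then track {/} depth until it returns to zero. This is a simplification - the real method recurses into a fresh Lexer with untilBalanced: on, which also handles braces hidden inside nested strings:

```javascript
// Find the index of the `}` that closes the `{` of a leading `#{`.
// Returns -1 when unbalanced. Braces inside nested string literals are
// NOT handled here; the real lexer recurses into a new Lexer for those.
function matchingBrace(str) {
  if (!str.startsWith('#{')) return -1;
  let depth = 0;
  for (let i = 1; i < str.length; i++) {
    if (str[i] === '{') depth++;
    else if (str[i] === '}' && --depth === 0) return i;
  }
  return -1;
}

console.log(matchingBrace('#{name}'));     // 6 - the closing brace
console.log(matchingBrace('#{ {a: 1} }')); // 10 - nested braces stay balanced
```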
  • §

    Merge the array tokens of the fake token types 'TOKENS' and 'NEOSTRING' (as returned by matchWithInterpolations) into the token stream. The value of 'NEOSTRING's are converted using fn and turned into strings using options first.

      mergeInterpolationTokens: (tokens, options, fn) ->
        {quote, indent, double, heregex, endOffset, jsx} = options
    
        if tokens.length > 1
          lparen = @token 'STRING_START', '(', length: quote?.length ? 0, data: {quote}, generated: not quote?.length
    
        firstIndex = @tokens.length
        $ = tokens.length - 1
        for token, i in tokens
          [tag, value] = token
          switch tag
            when 'TOKENS'
  • §

    There are comments (and nothing else) in this interpolation.

              if value.length is 2 and (value[0].comments or value[1].comments)
                placeholderToken = @makeToken 'JS', '', generated: yes
  • §

    Use the same location data as the first parenthesis.

                placeholderToken[2] = value[0][2]
                for val in value when val.comments
                  placeholderToken.comments ?= []
                  placeholderToken.comments.push val.comments...
                value.splice 1, 0, placeholderToken
  • §

    Push all the tokens in the fake 'TOKENS' token. These already have sane location data.

              locationToken = value[0]
              tokensToPush = value
            when 'NEOSTRING'
  • §

    Convert 'NEOSTRING' into 'STRING'.

              converted = fn.call this, token[1], i
              addTokenData token, initialChunk: yes if i is 0
              addTokenData token, finalChunk: yes   if i is $
              addTokenData token, {indent, quote, double}
              addTokenData token, {heregex} if heregex
              addTokenData token, {jsx} if jsx
              token[0] = 'STRING'
              token[1] = '"' + converted + '"'
              if tokens.length is 1 and quote?
                token[2].first_column -= quote.length
                if token[1].substr(-2, 1) is '\n'
                  token[2].last_line += 1
                  token[2].last_column = quote.length - 1
                else
                  token[2].last_column += quote.length
                  token[2].last_column -= 1 if token[1].length is 2
                token[2].last_column_exclusive += quote.length
                token[2].range = [
                  token[2].range[0] - quote.length
                  token[2].range[1] + quote.length
                ]
              locationToken = token
              tokensToPush = [token]
          @tokens.push tokensToPush...
    
        if lparen
          [..., lastToken] = tokens
          lparen.origin = ['STRING', null,
            first_line:            lparen[2].first_line
            first_column:          lparen[2].first_column
            last_line:             lastToken[2].last_line
            last_column:           lastToken[2].last_column
            last_line_exclusive:   lastToken[2].last_line_exclusive
            last_column_exclusive: lastToken[2].last_column_exclusive
            range: [
              lparen[2].range[0]
              lastToken[2].range[1]
            ]
          ]
          lparen[2] = lparen.origin[2] unless quote?.length
          rparen = @token 'STRING_END', ')', offset: endOffset - (quote ? '').length, length: quote?.length ? 0, generated: not quote?.length
  • §

    Pairs up a closing token, ensuring that all listed pairs of tokens are correctly balanced throughout the course of the token stream.

      pair: (tag) ->
        [..., prev] = @ends
        unless tag is wanted = prev?.tag
          @error "unmatched #{tag}" unless 'OUTDENT' is wanted
  • §

    Auto-close INDENT to support syntax like this:

    el.click((event) ->
      el.hide())
    
          [..., lastIndent] = @indents
          @outdentToken moveOut: lastIndent, noNewlines: true
          return @pair tag
        @ends.pop()
  • §

    Helpers

  • §
  • §

    Compensate for the things we strip out initially (e.g. carriage returns) so that location data stays accurate with respect to the original source file.

      getLocationDataCompensation: (start, end) ->
        totalCompensation = 0
        initialEnd = end
        current = start
        while current <= end
          break if current is end and start isnt initialEnd
          compensation = @locationDataCompensations[current]
          if compensation?
            totalCompensation += compensation
            end += compensation
          current++
        return totalCompensation
  • §

    Returns the line and column number from an offset into the current chunk.

    offset is a number of characters into @chunk.

      getLineAndColumnFromChunk: (offset) ->
        compensation = @getLocationDataCompensation @chunkOffset, @chunkOffset + offset
    
        if offset is 0
          return [@chunkLine, @chunkColumn + compensation, @chunkOffset + compensation]
    
        if offset >= @chunk.length
          string = @chunk
        else
          string = @chunk[..offset-1]
    
        lineCount = count string, '\n'
    
        column = @chunkColumn
        if lineCount > 0
          [..., lastLine] = string.split '\n'
          column = lastLine.length
          previousLinesCompensation = @getLocationDataCompensation @chunkOffset, @chunkOffset + offset - column
  • §

    Don't recompensate for initially inserted newlines.

          previousLinesCompensation = 0 if previousLinesCompensation < 0
          columnCompensation = @getLocationDataCompensation(
            @chunkOffset + offset + previousLinesCompensation - column
            @chunkOffset + offset + previousLinesCompensation
          )
        else
          column += string.length
          columnCompensation = compensation
    
        [@chunkLine + lineCount, column + columnCompensation, @chunkOffset + offset + compensation]
    
      makeLocationData: ({ offsetInChunk, length }) ->
        locationData = range: []
        [locationData.first_line, locationData.first_column, locationData.range[0]] =
          @getLineAndColumnFromChunk offsetInChunk
  • §

    Use length - 1 for the final offset - we're supplying the last_line and last_column, so if last_column == first_column, then we're looking at a character of length 1.

        lastCharacter = if length > 0 then (length - 1) else 0
        [locationData.last_line, locationData.last_column, endOffset] =
          @getLineAndColumnFromChunk offsetInChunk + lastCharacter
        [locationData.last_line_exclusive, locationData.last_column_exclusive] =
          @getLineAndColumnFromChunk offsetInChunk + lastCharacter + (if length > 0 then 1 else 0)
        locationData.range[1] = if length > 0 then endOffset + 1 else endOffset
    
        locationData
  • §

    Same as token, except this just returns the token without adding it to the results.

      makeToken: (tag, value, {offset: offsetInChunk = 0, length = value.length, origin, generated, indentSize} = {}) ->
        token = [tag, value, @makeLocationData {offsetInChunk, length}]
        token.origin = origin if origin
        token.generated = yes if generated
        token.indentSize = indentSize if indentSize?
        token
  • §

    Add a token to the results. offset is the offset into the current @chunk where the token starts. length is the length of the token in the @chunk, after the offset. If not specified, the length of value will be used.

    Returns the new token.

      token: (tag, value, {offset, length, origin, data, generated, indentSize} = {}) ->
        token = @makeToken tag, value, {offset, length, origin, generated, indentSize}
        addTokenData token, data if data
        @tokens.push token
        token
  • §

    Peek at the last tag in the token stream.

      tag: ->
        [..., token] = @tokens
        token?[0]
  • §

    Peek at the last value in the token stream.

      value: (useOrigin = no) ->
        [..., token] = @tokens
        if useOrigin and token?.origin?
          token.origin[1]
        else
          token?[1]
  • §

    Get the previous token in the token stream.

      prev: ->
        @tokens[@tokens.length - 1]
  • §

    Are we in the midst of an unfinished expression?

      unfinished: ->
        LINE_CONTINUER.test(@chunk) or
        @tag() in UNFINISHED
    
      validateUnicodeCodePointEscapes: (str, options) ->
        replaceUnicodeCodePointEscapes str, merge options, {@error}
  • §

    Validates escapes in strings and regexes.

      validateEscapes: (str, options = {}) ->
        invalidEscapeRegex =
          if options.isRegex
            REGEX_INVALID_ESCAPE
          else
            STRING_INVALID_ESCAPE
        match = invalidEscapeRegex.exec str
        return unless match
        [[], before, octal, hex, unicodeCodePoint, unicode] = match
        message =
          if octal
            "octal escape sequences are not allowed"
          else
            "invalid escape sequence"
        invalidEscape = "\\#{octal or hex or unicodeCodePoint or unicode}"
        @error "#{message} #{invalidEscape}",
          offset: (options.offsetInChunk ? 0) + match.index + before.length
          length: invalidEscape.length
    
      suppressSemicolons: ->
        while @value() is ';'
          @tokens.pop()
          @error 'unexpected ;' if @prev()?[0] in ['=', UNFINISHED...]
  • §

    Throws an error at either a given offset from the current chunk or at the location of a token (token[2]).

      error: (message, options = {}) =>
        location =
          if 'first_line' of options
            options
          else
            [first_line, first_column] = @getLineAndColumnFromChunk options.offset ? 0
            {first_line, first_column, last_column: first_column + (options.length ? 1) - 1}
        throwSyntaxError message, location
  • §

    Helper Functions

  • §
    
    isUnassignable = (name, displayName = name) -> switch
      when name in [JS_KEYWORDS..., COFFEE_KEYWORDS...]
        "keyword '#{displayName}' can't be assigned"
      when name in STRICT_PROSCRIBED
        "'#{displayName}' can't be assigned"
      when name in RESERVED
        "reserved word '#{displayName}' can't be assigned"
      else
        false
    
    exports.isUnassignable = isUnassignable
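  • §

    A direct JavaScript transliteration of isUnassignable, with the keyword lists abbreviated for the example (the full lists are defined under Constants below):

```javascript
// Transliteration of `isUnassignable` with shortened constant lists.
const JS_KEYWORDS = ['true', 'false', 'null', 'this', 'if', 'class'];
const COFFEE_KEYWORDS = ['undefined', 'then', 'unless'];
const STRICT_PROSCRIBED = ['arguments', 'eval'];
const RESERVED = ['case', 'function', 'var'];

function isUnassignable(name, displayName = name) {
  if (JS_KEYWORDS.includes(name) || COFFEE_KEYWORDS.includes(name))
    return `keyword '${displayName}' can't be assigned`;
  if (STRICT_PROSCRIBED.includes(name))
    return `'${displayName}' can't be assigned`;
  if (RESERVED.includes(name))
    return `reserved word '${displayName}' can't be assigned`;
  return false;
}

console.log(isUnassignable('class')); // "keyword 'class' can't be assigned"
console.log(isUnassignable('foo'));   // false - assignable
```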
  • §

    from isn't a CoffeeScript keyword, but it behaves like one in import and export statements (handled above) and in the declaration line of a for loop. Try to detect whether from is a variable identifier or this "sometimes" keyword.

    isForFrom = (prev) ->
  • §

    for i from iterable

      if prev[0] is 'IDENTIFIER'
        yes
  • §

    for from…

      else if prev[0] is 'FOR'
        no
  • §

    for {from}…, for [from]…, for {a, from}…, for {a: from}…

      else if prev[1] in ['{', '[', ',', ':']
        no
      else
        yes
    
    addTokenData = (token, data) ->
      Object.assign (token.data ?= {}), data
  • §

    Constants

  • §
  • §

    Keywords that CoffeeScript shares in common with JavaScript.

    JS_KEYWORDS = [
      'true', 'false', 'null', 'this'
      'new', 'delete', 'typeof', 'in', 'instanceof'
      'return', 'throw', 'break', 'continue', 'debugger', 'yield', 'await'
      'if', 'else', 'switch', 'for', 'while', 'do', 'try', 'catch', 'finally'
      'class', 'extends', 'super'
      'import', 'export', 'default'
    ]
  • §

    CoffeeScript-only keywords.

    COFFEE_KEYWORDS = [
      'undefined', 'Infinity', 'NaN'
      'then', 'unless', 'until', 'loop', 'of', 'by', 'when'
    ]
    
    COFFEE_ALIAS_MAP =
      and  : '&&'
      or   : '||'
      is   : '=='
      isnt : '!='
      not  : '!'
      yes  : 'true'
      no   : 'false'
      on   : 'true'
      off  : 'false'
    
    COFFEE_ALIASES  = (key for key of COFFEE_ALIAS_MAP)
    COFFEE_KEYWORDS = COFFEE_KEYWORDS.concat COFFEE_ALIASES
  • §

    The list of keywords that are reserved by JavaScript, but not used, or are used by CoffeeScript internally. We throw an error when these are encountered, to avoid having a JavaScript error at runtime.

    RESERVED = [
      'case', 'function', 'var', 'void', 'with', 'const', 'let', 'enum'
      'native', 'implements', 'interface', 'package', 'private'
      'protected', 'public', 'static'
    ]
    
    STRICT_PROSCRIBED = ['arguments', 'eval']
  • §

    The superset of both JavaScript keywords and reserved words, none of which may be used as identifiers or properties.

    exports.JS_FORBIDDEN = JS_KEYWORDS.concat(RESERVED).concat(STRICT_PROSCRIBED)
  • §

    The character code of the nasty Microsoft madness otherwise known as the BOM.

    BOM = 65279
  • §

    Token matching regexes.

    IDENTIFIER = /// ^
      (?!\d)
      ( (?: (?!\s)[$\w\x7f-\uffff] )+ )
      ( [^\n\S]* : (?!:) )?  # Is this a property name?
    ///
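  • §

    The same pattern, flattened into a plain JavaScript regex literal (the /// heregex syntax only strips whitespace and comments, so the flattening is mechanical):

```javascript
// `IDENTIFIER` as a one-line JavaScript regex.
// Group 1: the identifier itself; group 2: a trailing `:` (not `::`),
// which marks the identifier as a property name.
const IDENTIFIER = /^(?!\d)((?:(?!\s)[$\w\x7f-\uffff])+)([^\n\S]*:(?!:))?/;

console.log(IDENTIFIER.exec('counter = 1')[1]); // 'counter'
console.log(IDENTIFIER.exec('key: value')[2]);  // ':' - a property name
console.log(IDENTIFIER.exec('1abc'));           // null - can't start with a digit
```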
  • §

    Like IDENTIFIER, but includes -.

    JSX_IDENTIFIER_PART = /// (?: (?!\s)[\-$\w\x7f-\uffff] )+ ///.source
  • §

    In the https://facebook.github.io/jsx/ spec, JSXElementName can be JSXIdentifier, JSXNamespacedName (JSXIdentifier : JSXIdentifier), or JSXMemberExpression (two or more JSXIdentifier connected by .).

    JSX_IDENTIFIER = /// ^
      (?![\d<]) # Must not start with `<`.
      ( #{JSX_IDENTIFIER_PART}
        (?: \s* : \s* #{JSX_IDENTIFIER_PART}       # JSXNamespacedName
        | (?: \s* \. \s* #{JSX_IDENTIFIER_PART} )+ # JSXMemberExpression
        )? )
    ///
  • §

    Fragment: <></>

    JSX_FRAGMENT_IDENTIFIER = /// ^
      ()> # Ends immediately with `>`.
    ///
  • §

    In the https://facebook.github.io/jsx/ spec, JSXAttributeName can be either a JSXIdentifier or a JSXNamespacedName, which is JSXIdentifier : JSXIdentifier.

    JSX_ATTRIBUTE = /// ^
      (?!\d)
      ( #{JSX_IDENTIFIER_PART}
        (?: \s* : \s* #{JSX_IDENTIFIER_PART}       # JSXNamespacedName
        )? )
      ( [^\S]* = (?!=) )?  # Is this an attribute with a value?
    ///
    
    NUMBER     = ///
      ^ 0b[01](?:_?[01])*n?                         | # binary
      ^ 0o[0-7](?:_?[0-7])*n?                       | # octal
      ^ 0x[\da-f](?:_?[\da-f])*n?                   | # hex
      ^ \d+(?:_\d+)*n                               | # decimal bigint
      ^ (?:\d+(?:_\d+)*)?      \.? \d+(?:_\d+)*       # decimal
                         (?:e[+-]? \d+(?:_\d+)* )?
    
  • §

    decimal number without support for numeric literal separators, for reference: \d*\.?\d+ (?:e[+-]?\d+)?

    ///i
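  • §

    Flattened into a single JavaScript literal, the alternation can be probed directly (binary, octal, hex, BigInt, then plain decimal, tried in that order):

```javascript
// `NUMBER` flattened to a one-line JavaScript regex (case-insensitive).
const NUMBER = /^0b[01](?:_?[01])*n?|^0o[0-7](?:_?[0-7])*n?|^0x[\da-f](?:_?[\da-f])*n?|^\d+(?:_\d+)*n|^(?:\d+(?:_\d+)*)?\.?\d+(?:_\d+)*(?:e[+-]?\d+(?:_\d+)*)?/i;

console.log(NUMBER.exec('0x1A')[0]);    // '0x1A'    - hex
console.log(NUMBER.exec('1_000')[0]);   // '1_000'   - separator in a decimal
console.log(NUMBER.exec('42n')[0]);     // '42n'     - BigInt
console.log(NUMBER.exec('3.14e-2')[0]); // '3.14e-2' - decimal with exponent
```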
    
    OPERATOR   = /// ^ (
      ?: [-=]>             # function
       | [-+*/%<>&|^!?=]=  # compound assign / compare
       | >>>=?             # zero-fill right shift
       | ([-+:])\1         # doubles
       | ([&|<>*/%])\2=?   # logic / shift / power / floor division / modulo
       | \?(\.|::)         # soak access
       | \.{2,3}           # range or splat
    ) ///
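  • §

    Flattened into a JavaScript literal, a few probes show how the alternation carves up the multi-character operators:

```javascript
// `OPERATOR` flattened to a one-line JavaScript regex.
const OPERATOR = /^(?:[-=]>|[-+*/%<>&|^!?=]=|>>>=?|([-+:])\1|([&|<>*/%])\2=?|\?(\.|::)|\.{2,3})/;

console.log(OPERATOR.exec('->')[0]);  // '->'  - function arrow
console.log(OPERATOR.exec('**=')[0]); // '**=' - exponentiation assign
console.log(OPERATOR.exec('...')[0]); // '...' - range or splat
console.log(OPERATOR.exec('?.')[0]);  // '?.'  - soak access
```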
    
    WHITESPACE = /^[^\n\S]+/
    
    COMMENT    = /^(\s*)###([^#][\s\S]*?)(?:###([^\n\S]*)|###$)|^((?:\s*#(?!##[^#]).*)+)/
    
    CODE       = /^[-=]>/
    
    MULTI_DENT = /^(?:\n[^\n\S]*)+/
    
    JSTOKEN      = ///^ `(?!``) ((?: [^`\\] | \\[\s\S]           )*) `   ///
    HERE_JSTOKEN = ///^ ```     ((?: [^`\\] | \\[\s\S] | `(?!``) )*) ``` ///
    
    
  • §

    String-matching regexes.

    STRING_START   = /^(?:'''|"""|'|")/
    
    STRING_SINGLE  = /// ^(?: [^\\']  | \\[\s\S]                      )* ///
    STRING_DOUBLE  = /// ^(?: [^\\"#] | \\[\s\S] |           \#(?!\{) )* ///
    HEREDOC_SINGLE = /// ^(?: [^\\']  | \\[\s\S] | '(?!'')            )* ///
    HEREDOC_DOUBLE = /// ^(?: [^\\"#] | \\[\s\S] | "(?!"") | \#(?!\{) )* ///
    
    INSIDE_JSX = /// ^(?:
        [^
          \{ # Start of CoffeeScript interpolation.
          <  # Maybe JSX tag (`<` not allowed even if bare).
        ]
      )* /// # Similar to `HEREDOC_DOUBLE` but there is no escaping.
    JSX_INTERPOLATION = /// ^(?:
          \{       # CoffeeScript interpolation.
        | <(?!/)   # JSX opening tag.
      )///
    
    HEREDOC_INDENT     = /\n+([^\n\S]*)(?=\S)/g
  • §

    Regex-matching regexes.

    REGEX = /// ^
      / (?!/) ((
      ?: [^ [ / \n \\ ]  # Every other thing.
       | \\[^\n]         # Anything but newlines escaped.
       | \[              # Character class.
           (?: \\[^\n] | [^ \] \n \\ ] )*
         \]
      )*) (/)?
    ///
    
    REGEX_FLAGS  = /^\w*/
    VALID_FLAGS  = /^(?!.*(.).*\1)[gimsuy]*$/
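  • §

    The lookahead (?!.*(.).*\1) in VALID_FLAGS uses a backreference to reject any flag letter that appears twice; a flattened JavaScript copy demonstrates:

```javascript
// `VALID_FLAGS`: only the letters g, i, m, s, u, y, each at most once.
const VALID_FLAGS = /^(?!.*(.).*\1)[gimsuy]*$/;

console.log(VALID_FLAGS.test('gim')); // true
console.log(VALID_FLAGS.test('gg'));  // false - duplicate flag
console.log(VALID_FLAGS.test('gx'));  // false - unknown flag
```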
    
    HEREGEX      = /// ^
      (?:
    
  • §

    Match any character, except those that need special handling below.

          [^\\/#\s]
  • §

    Match \ followed by any character.

        | \\[\s\S]
  • §

    Match any / except ///.

        | /(?!//)
  • §

    Match # which is not part of an interpolation, e.g. #{}.

        | \#(?!\{)
  • §

    Comments consume everything until the end of the line, including ///.

        | \s+(?:#(?!\{).*)?
      )*
    ///
    
    HEREGEX_COMMENT = /(\s+)(#(?!{).*)/gm
    
    REGEX_ILLEGAL = /// ^ ( / | /{3}\s*) (\*) ///
    
    POSSIBLY_DIVISION   = /// ^ /=?\s ///
  • §

    Other regexes.

    HERECOMMENT_ILLEGAL = /\*\//
    
    LINE_CONTINUER      = /// ^ \s* (?: , | \??\.(?![.\d]) | \??:: ) ///
    
    STRING_INVALID_ESCAPE = ///
      ( (?:^|[^\\]) (?:\\\\)* )        # Make sure the escape isn’t escaped.
      \\ (
         ?: (0\d|[1-7])                # octal escape
          | (x(?![\da-fA-F]{2}).{0,2}) # hex escape
          | (u\{(?![\da-fA-F]{1,}\})[^}]*\}?) # unicode code point escape
          | (u(?!\{|[\da-fA-F]{4}).{0,4}) # unicode escape
      )
    ///
    REGEX_INVALID_ESCAPE = ///
      ( (?:^|[^\\]) (?:\\\\)* )        # Make sure the escape isn’t escaped.
      \\ (
         ?: (0\d)                      # octal escape
          | (x(?![\da-fA-F]{2}).{0,2}) # hex escape
          | (u\{(?![\da-fA-F]{1,}\})[^}]*\}?) # unicode code point escape
          | (u(?!\{|[\da-fA-F]{4}).{0,4}) # unicode escape
      )
    ///
    
    TRAILING_SPACES     = /\s+$/
  • §

    Compound assignment tokens.

    COMPOUND_ASSIGN = [
      '-=', '+=', '/=', '*=', '%=', '||=', '&&=', '?=', '<<=', '>>=', '>>>='
      '&=', '^=', '|=', '**=', '//=', '%%='
    ]
  • §

    Unary tokens.

    UNARY = ['NEW', 'TYPEOF', 'DELETE']
    
    UNARY_MATH = ['!', '~']
  • §

    Bit-shifting tokens.

    SHIFT = ['<<', '>>', '>>>']
  • §

    Comparison tokens.

    COMPARE = ['==', '!=', '<', '>', '<=', '>=']
  • §

    Mathematical tokens.

    MATH = ['*', '/', '%', '//', '%%']
  • §

    Relational tokens that are negatable with a not prefix.

    RELATION = ['IN', 'OF', 'INSTANCEOF']
  • §

    Boolean tokens.

    BOOL = ['TRUE', 'FALSE']
  • §

    Tokens which could legitimately be invoked or indexed. An opening parenthesis or bracket following these tokens will be recorded as the start of a function invocation or indexing operation.

    CALLABLE  = ['IDENTIFIER', 'PROPERTY', ')', ']', '?', '@', 'THIS', 'SUPER', 'DYNAMIC_IMPORT']
    INDEXABLE = CALLABLE.concat [
      'NUMBER', 'INFINITY', 'NAN', 'STRING', 'STRING_END', 'REGEX', 'REGEX_END'
      'BOOL', 'NULL', 'UNDEFINED', '}', '::'
    ]
  • §

    Tokens which can be the left-hand side of a less-than comparison, i.e. a<b.

    COMPARABLE_LEFT_SIDE = ['IDENTIFIER', ')', ']', 'NUMBER']
  • §

    Tokens which a regular expression will never immediately follow (except spaced CALLABLEs in some cases), but which a division operator can.

    See: http://www-archive.mozilla.org/js/language/js20-2002-04/rationale/syntax.html#regular-expressions

    NOT_REGEX = INDEXABLE.concat ['++', '--']
  • §

    Tokens that, when immediately preceding a WHEN, indicate that the WHEN occurs at the start of a line. We disambiguate these from trailing whens to avoid an ambiguity in the grammar.

    LINE_BREAK = ['INDENT', 'OUTDENT', 'TERMINATOR']
  • §

    Additional indent in front of these tokens is ignored.

    INDENTABLE_CLOSERS = [')', '}', ']']