pyastgrep and custom linting

A while back I released pyastgrep, which is a rewrite of astpath. It’s a tool that allows you to search for specific Python syntax elements using XPath as a query language.

As part of the rewrite, I separated out the layers of code so that it can now be used as a library as well as a command line tool. I haven’t committed to very much API surface area for library usage, but there is enough.
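
For instance, a minimal sketch of library usage (based only on the search_python_files and Match API that I also use in the test helper later in this post) looks something like this:

from pathlib import Path

from pyastgrep.api import Match, search_python_files

# Find all function definitions named "foo" under the current directory
# and print where they are.
for item in search_python_files([Path(".")], './/FunctionDef[@name="foo"]'):
    if isinstance(item, Match):
        print(f"{item.path}:{item.position.lineno}: {item.matching_line}")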

My main personal use of this has been for linting tasks, or enforcing conventions that might be difficult to check in other ways. I don’t always use pyastgrep for this – quite often I’d reach for custom Semgrep rules, and at other times I use introspection to enforce conventions. However, there are times when both of these fail or are rather difficult.

Examples

Some examples of the kinds of rules I’m thinking of include:

  • Boolean arguments to functions/methods should always be “keyword only”.

    Keyword-only arguments are a big win in many cases, and especially when it comes to boolean values. For example, forcing delete_thing(True, False) to be something like delete_thing(permanent=True, force=False) is an easy win, and this is common enough that applying this as a default policy across the code base will probably be a good idea.

    The pattern can be distinguished easily at the syntax level. Good:

    def foo(*, my_bool_arg: bool):
        ...
    

    Bad:

    def foo(my_bool_arg: bool):
        ...
    
  • Simple coding conventions like “Don’t use single letter variables like i or j as loop variables, use index or idx instead”.

    This can be found by looking for code like:

    for i, val in enumerate(...):
        ...
    

    You might not care about this, but if you do, you really want the rule to be applied as an automated test, not a nit-picky code review.

  • A Django-specific one: for inclusion tags, the tag names should match the template file name. This is nice for consistency and code navigation, plus I actually have some custom “jump to definition” code in my editor that relies on it for fast navigation.

    The pattern can again be seen quite easily at the syntax level. Good:

    @inclusion_tag("something/foo.html")
    def foo():
        ...
    

    Bad:

    @inclusion_tag("something/bar.html")
    def foo():
        ...
    
  • Any “task” (something decorated with @task) should be named foo_task in order to give a clue that it works as an asynchronous call, and its return value is just a promise object.

There are many more examples you’ll come up with once you start thinking like this.
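
To give a flavour of the queries involved, the single letter loop variable convention above can be expressed as a one-line search. This is an untested sketch; the For/target structure and the @id attribute follow the XML that pyastdump produces, shown in the next section:

pyastgrep './/For/target//Name[@id="i" or @id="j"]'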

Method

Having identified the bad patterns we want to find and fix, my method for doing so looks as follows. It contains a number of tips and refinements I’ve made over the past few years.

First, I open a test file, e.g. tests/test_conventions.py, and start by inserting some example code – at least one bad example (the kind we are trying to fix), and one good example.

There are a few reasons for this:

  • First, I need to make sure I can prove life exists on earth, as John D. Cook puts it. I’ll say more about this later on.

  • Second, it gives me a deliberately simplified bit of code that I can pass to pyastdump.

  • Third, it provides some explanation for the test you are going to write, and for the potentially rather hairy XPath expression it will contain.

I’ll use my first example above, keyword-only boolean args. I start by inserting the following text into my test file:

def bad_boolean_arg(foo: bool):
    pass


def good_boolean_arg(*, foo: bool):
    pass

Then, I copy both of these in turn to the clipboard (or both together if there isn’t much code, like in this case), and pass them through pyastdump. From a terminal, I do:

$ xsel | pyastdump -

I’m using the xsel Linux utility; you can also use xclip -out, pbpaste on macOS, or Get-Clipboard in PowerShell.

This gives me some AST to look at, structured as XML:

<Module>
  <body>
    <FunctionDef lineno="1" col_offset="0" type="str" name="bad_boolean_arg">
      <args>
        <arguments>
          <posonlyargs/>
          <args>
            <arg lineno="1" col_offset="20" type="str" arg="foo">
              <annotation>
                <Name lineno="1" col_offset="25" type="str" id="bool">
                  <ctx>
                    <Load/>
                  </ctx>
                </Name>
              </annotation>
            </arg>
          </args>
          <kwonlyargs/>
          <kw_defaults/>
          <defaults/>
        </arguments>
      </args>
      <body>
        <Pass lineno="2" col_offset="4"/>
      </body>
      <decorator_list/>
    </FunctionDef>
  </body>
  <type_ignores/>
</Module>

In this case, the current structure of Python’s AST has helped us out a lot – it has separated out posonlyargs (positional only arguments), args (positional or keyword), and kwonlyargs (keyword only args). We can see the offending annotation containing a Name with id="bool" inside the args, when we want it only to be allowed as a keyword-only argument.

(Do we want to disallow boolean-annotated arguments as positional only? I’m leaning towards “no” here, as positional only is quite rare and usually a very deliberate choice).

I now have to construct an XPath expression that will find the offending XML nodes, but not match good examples. It’s pretty straightforward in this case, once you know the basics of XPath. I test it out straight away at the CLI:

pyastgrep './/FunctionDef/args/arguments/args/arg/annotation/Name[@id="bool"]' tests/test_conventions.py

If I’ve done it correctly, it should print my bad example, and not my good example.

Then I widen the net, omitting tests/test_conventions.py to search everywhere in my current directory.
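
In command form, that is just the same query with the path argument dropped, since pyastgrep searches the current directory by default:

pyastgrep './/FunctionDef/args/arguments/args/arg/annotation/Name[@id="bool"]'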

At this point, I’ve probably got some real results that I want to address, but I might also notice there are other variants of the same thing I need to be able to match, and so I iterate, adding more bad/good examples as necessary.
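
For example, if I decided that boolean positional-only arguments should be caught after all, the posonlyargs element in the dump above suggests a sibling query along these lines (an untested sketch):

pyastgrep './/FunctionDef/args/arguments/posonlyargs/arg/annotation/Name[@id="bool"]'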

Now I need to write a test. It’s going to look like this:

def test_boolean_arguments_are_keyword_only():
    assert_expected_pyastgrep_matches(
        """
        .//FunctionDef/args/arguments/args/arg/annotation/Name[@id="bool"]
        """,
        message="Function arguments with type `bool` should be keyword-only",
        expected_count=1,
    )

Of course, the real work is being done inside my assert_expected_pyastgrep_matches utility, which looks like this:

from pathlib import Path
from boltons import iterutils
from pyastgrep.api import Match, search_python_files

SRC_ROOT = Path(__file__).parent.parent.resolve()  # depends on project structure


def assert_expected_pyastgrep_matches(xpath_expr: str, *, expected_count: int, message: str):
    """
    Asserts that the pyastgrep XPath expression matches at least `expected_count` times,
    and that every match is marked with a `pyastgrep: expected` comment.

    `message` is a message to be printed on failure.

    """
    xpath_expr = xpath_expr.strip()
    matches: list[Match] = [item for item in search_python_files([SRC_ROOT], xpath_expr) if isinstance(item, Match)]

    expected_matches, other_matches = iterutils.partition(
        matches, key=lambda match: "pyastgrep: expected" in match.matching_line
    )

    if len(expected_matches) < expected_count:
        assert False, f"Expected {expected_count} matches but found {len(expected_matches)} for {xpath_expr}"

    assert not other_matches, (
        message
        + "\n Failing examples:\n"
        + "\n".join(
            f"  {match.path}:{match.position.lineno}:{match.position.col_offset}:{match.matching_line}"
            for match in other_matches
        )
    )

There is a bit of explaining to do now.

Being sure that you can “find life on earth” is especially important for a negative test like this. It would be very easy to have an XPath query that you thought worked but didn’t, as it might just silently return zero results. In addition, Python’s AST is not stable (it can change between Python versions) – so a query that works now might stop working in the future.

It’s like you have a machine that claims to be able to find needles in haystacks – when it comes back and says “no needles found”, do you believe it? To increase your confidence that everything works and continues to work, you place a few needles at locations that you know, then check that the machine is able to find those needles. When it claims “found exactly 2 needles”, and you can account for those, you’ve got much more confidence that it has indeed found the only needles.

So, it’s important to leave my bad examples in there.

But, I obviously don’t want the bad examples to cause the test to fail! In addition, I want a mechanism for exceptions. A simple mechanism I’ve chosen is to add the text pyastgrep: expected as a comment.

So, I need to change my bad example like this:

def bad_boolean_arg(foo: bool):  # pyastgrep: expected
    pass

I also pass expected_count=1 to indicate that I expect to find at least one bad example (I’d increase the count if I added more bad examples).

Hopefully that explains everything assert_expected_pyastgrep_matches does. A couple more notes:

  • it uses boltons, a pretty useful set of Python utilities

  • it requires a SRC_ROOT folder to be defined, which will depend on your project, and might be different depending on which folder(s) you want to apply the convention to.

Now, everything is set up, and I run the test for real, hopefully locating all the bad usages. I work through and fix them, then leave the test in.
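
Assuming pytest as the test runner (the bare assert style above is written with pytest in mind), that is just:

$ pytest tests/test_conventions.py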

Tips

  • pyastgrep works strictly at the syntax level, so unlike Semgrep you might get caught out by aliases if you try to match on specific names:

    from foo import bar
    from foo import bar as foo_bar
    import foo
    
    # These all call the same function but look different in the AST:
    foo.bar()
    bar()
    foo_bar()
    
  • There is, however, an advantage to this – you don’t need a real import to construct your bad examples; you can just use a Mock. For example, for my inclusion_tag example above, I have code like:

    from unittest.mock import Mock
    
    register = Mock()
    
    @register.inclusion_tag(filename="something/not_bad_tag.html")
    def bad_tag():  # pyastgrep: expected
        pass
    

    You can see the full code on GitHub.

  • You might be able to use a mixture of techniques:

    • A Semgrep rule bans one set of bad patterns by disallowing direct use of some thirdparty.func and requiring everyone to use your own wrapper, which is then constructed in such a way that a pyastgrep rule is easier to apply

    • Some introspection that produces a list of classes or functions to which some rule applies, then dynamically generates an XPath expression to pass to pyastgrep (there’s a sketch of this below).
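
As a sketch of that last idea: if some introspection step gives you a list of function names that must not be called directly (the get_deprecated_function_names helper below is purely hypothetical), you can build the XPath expression from that list and search with the same library API as before:

from pathlib import Path

from pyastgrep.api import Match, search_python_files

SRC_ROOT = Path(__file__).parent.parent.resolve()


def get_deprecated_function_names() -> list[str]:
    # Hypothetical introspection step; in reality this might, for example,
    # collect every function marked with a @deprecated decorator.
    return ["old_helper", "legacy_fetch"]


def test_no_calls_to_deprecated_functions():
    # Build an XPath "or" clause from the introspected names, then search as usual.
    condition = " or ".join(f'@id="{name}"' for name in get_deprecated_function_names())
    xpath_expr = f".//Call/func/Name[{condition}]"
    matches = [item for item in search_python_files([SRC_ROOT], xpath_expr) if isinstance(item, Match)]
    assert not matches, "Don't call deprecated functions directly:\n" + "\n".join(
        f"  {match.path}:{match.position.lineno}" for match in matches
    )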

Conclusion

Syntax level searching isn’t right for every job, but it can be a powerful addition to your toolkit, and with a decent query language like XPath, you can do a surprising amount. Have a look at the pyastgrep examples for inspiration!
