# PerfettoSQL Chrome Standard Library tests

This directory contains the [Perfetto Diff Tests](https://perfetto.dev/docs/analysis/trace-processor#diff-tests) to test changes to the Chrome standard library.

The diff tests themselves are in `./trace_processor/diff_tests/chrome`. The `./data` directory contains the Perfetto traces that are used by the diff tests. As well as testing the functionality of your metric, the diff tests help to ensure that the stdlib remains backwards compatible with existing traces recorded from older Chrome versions.

## Running Diff Tests

Currently, the diff tests only run on Linux. You can build and run them with the following commands:

```
$ gn gen --args='' out/Linux
$ gclient sync
$ autoninja -C out/Linux perfetto_diff_tests
$ out/Linux/bin/run_perfetto_diff_tests
```

To run specific diff tests, pass the `--name-filter` flag to the `run_perfetto_diff_tests` script with a regex that matches the names of the tests you want to run.
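For example, to run only the tests whose names match a pattern (the pattern here is illustrative):

```
$ out/Linux/bin/run_perfetto_diff_tests --name-filter '.*ChromeStdlib.*'
```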

## Adding a New Diff Test

Your new diff test should go in `base/tracing/test/trace_processor/diff_tests/chrome`. You can either add to an existing TestSuite in one of the files or add a new test in a new file.

If you are adding a **new TestSuite**, be sure to add it to `include_index.py` so the runner knows to run this new TestSuite.
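A diff test follows Perfetto's blueprint pattern: it names a trace, a query to run over it, and the expected output. The sketch below shows the general shape; the suite name, trace path, query, and expected CSV are all illustrative, so copy the exact imports and structure from an existing file in the directory:

```
from python.generators.diff_tests.testing import Csv, DataPath, DiffTestBlueprint, TestSuite


class MyNewSuite(TestSuite):

  def test_my_metric(self):
    return DiffTestBlueprint(
        # Trace from base/tracing/test/data.
        trace=DataPath('my_trace.pftrace'),
        query="""
        SELECT name, dur
        FROM slice
        ORDER BY dur DESC
        LIMIT 1;
        """,
        # Expected query output, as CSV with a header row.
        out=Csv("""
        "name","dur"
        "PipelineReporter",16000
        """))
```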

### Adding New Test Data

If your test requires modifying or adding test data (i.e. a new trace in `base/tracing/test/data`), you will need to perform the following steps:

**1**. Upload the file to the GCS bucket:
```
   $ base/tracing/test/test_data.py upload <path_to_file>
```
**2**. Add the deps entry produced by the above script to the [DEPS file](../../../DEPS) (see examples in the `src/base/tracing/test/data` entry).
```
{
  "path": {
    "dep_type": "gcs",
    "bucket": "perfetto",
    "objects": [
      {
        "object_name": "test_data/file_name-a1b2c3f4",
        "sha256sum": "a1b2c3f4",
        "size_bytes": 12345,
        "generation": 1234567
      }
    ]
  }
}
```
You will need to **manually** add this to the deps entry. After adding this entry, running `gclient sync` will download the test files in your local repo. See these [docs](https://chromium.googlesource.com/chromium/src/+/HEAD/docs/gcs_dependencies.md) for the GCS dependency workflow.

**Note:** you can get the DEPS entry separately from the upload step by calling `base/tracing/test/test_data.py get_deps <file_name>` or `base/tracing/test/test_data.py get_all_deps`.

**3**. Check in the .sha256 files produced by the `test_data.py upload` command (`file_name-a1b2c3f4.sha256` in `base/tracing/test/data`). These files will be rolled to Perfetto and used to download the GCS objects by Perfetto's own [test_data](../../../third_party/perfetto/tools/test_data) script.

## Writing TestTraceProcessor Tests

See [test_trace_processor_example_unittest.cc](../../test/test_trace_processor_example_unittest.cc) for examples you can compile and run.

You can write unit or browser tests with `TestTraceProcessor` to record a trace, run a query on it, and write expectations against the result.

Instructions:

1. For a unittest, add a `base::test::TracingEnvironment` member to your test class to handle setup and teardown between tests, and a `base::test::TaskEnvironment` member, which is needed for starting and stopping tracing. Full browser tests don't need these; they handle tracing setup as part of browser initialization.
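A minimal fixture could look like the following sketch (the class name is illustrative):

```
class MyTracingUnitTest : public testing::Test {
 protected:
  // Needed for starting/stopping tracing.
  base::test::TaskEnvironment task_environment_;
  // Handles tracing setup and teardown between tests.
  base::test::TracingEnvironment tracing_environment_;
};
```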

2. Record a trace:
```
TestTraceProcessor test_trace_processor;
test_trace_processor.StartTrace(/* category_filter_string */);

/* do stuff */

absl::Status status = test_trace_processor.StopAndParseTrace();
ASSERT_TRUE(status.ok()) << status.message();
```

3. Run your query:
```
auto result = test_trace_processor.RunQuery(/* your query */);
ASSERT_TRUE(result.has_value()) << result.error();
```

4. Write expectations against the output:
```
EXPECT_THAT(result.value(), /* your expectations */);
```

The output format is a 2D vector of strings `std::vector<std::vector<std::string>>` where each vector is an SQLite row you would see when querying from the Perfetto UI. The first row will contain the header names for the columns.
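In C++ terms, the result of a two-column query could look like the following (the query and all values are made up for illustration):

```
#include <string>
#include <vector>

// Hypothetical RunQuery() output for "SELECT name, dur FROM slice LIMIT 2":
// the first row holds the column headers, and every value is a string.
std::vector<std::vector<std::string>> FakeQueryResult() {
  return {
      {"name", "dur"},
      {"LatencyInfo.Flow", "0"},
      {"PipelineReporter", "16000"},
  };
}
```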

### Best Practices

* Use `ORDER BY` in queries so that the results are deterministic.

* Note that some data is not stable over time, in particular the ids generated by trace processor, which can change for the same trace if trace processor's under-the-hood parsing logic changes. Slice ids, utids and upids are the most common examples of this.

* In general, it's recommended for tests to focus on the relationships between events, e.g. checking that filtering by a specific id finds the correct event and that its name is as expected, rather than checking specific id values.
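For example, a query along these lines checks a slice's name and its thread relationship instead of hard-coding ids (the table and column names follow the Perfetto stdlib; the slice name is illustrative):

```
SELECT slice.name, thread.name AS thread_name
FROM slice
JOIN thread_track ON slice.track_id = thread_track.id
JOIN thread USING (utid)
WHERE slice.name = 'PipelineReporter'
ORDER BY slice.ts;
```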