Automating Runs

Profiles can be generated either manually (by running individual profilers or the perun collect and perun postprocess commands), or you can use Perun’s runner infrastructure to partially automate the generation process. Perun can run jobs either from the stored configuration, which is meant for regular project profiling (in either the local or the shared configuration, c.f. Perun Configuration files), or from a single job specification meant for irregular or specific profiling jobs.

Each profile generated by the specified batch jobs will be stored in the .perun/jobs directory under a name following this template:

command-collector-workload-Y-m-d-H-M-S.perf

Where command corresponds to the name of the application (or script) for which we collected the data using collector on workload at the given date. You can change the template for profile name generation by setting format.output_profile_template. New profiles are annotated with origin set to the current HEAD of the wrapped repository. The origin serves as a check when registering profiles in the indexes of minor versions: a profile whose origin differs from the target minor version will not be assigned, as that would violate the correctness of the project’s performance history. If you want to automatically register newly generated profiles in the corresponding minor version index, set the profiles.register_after_run key to a true value.
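
For instance, automatic registration can be turned on in the local configuration (opened e.g. by perun config --edit); the snippet below is only a minimal sketch of the relevant key, with the surrounding structure of local.yml assumed and otherwise omitted:

profiles:
   register_after_run: true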

_images/perun-jobs-flow.svg

The figure above shows an overview of the jobs flow in Perun. The runner module is initialized from the user interface and from the local (or shared) configuration, and internally generates the matrix of jobs, which are run in sequence. Each job finishes by storing the generated profile in the internal storage.

Note

In order to obtain sound results, it is advised to run each benchmark several times (at least three times) and either average over all runs or discard the first runs, since the initial runs usually have skewed times.

Note

If you do not want to miss profiling, e.g. after each push, commit, etc., check out git hooks. Git hooks allow you to run custom scripts on certain git event triggers.
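
As an illustrative sketch (the hook file and its contents are not part of Perun), a post-commit hook could simply invoke the pre-configured job matrix:

#!/bin/sh
# .git/hooks/post-commit -- profile every new commit using the configured job matrix
perun run matrix

Remember to make the hook executable, e.g. by chmod +x .git/hooks/post-commit.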

Runner CLI

The Command Line Interface contains a group of two commands for managing the jobs: perun run job for running one specified batch of jobs (usually corresponding to irregular measurements or profilings) and perun run matrix for running the pre-configured matrix specified in the YAML format (see Job Matrix Format for the full specification). Running the jobs by perun run matrix corresponds to regular measuring and profiling, e.g. at the end of release cycles, before a push to origin/upstream, or even after each commit.

perun run job

Run specified batch of perun jobs to generate profiles.

This command corresponds to running one isolated batch of profiling jobs, outside of regular profiling. To automate the regular profiling of your project, specify the job matrix in the local configuration and run perun run matrix instead. After the batch is generated, each profile is tagged with origin set to the current HEAD. This serves as a check against assigning such profiles to different minor versions.

By default, the profiles computed by this batch job are stored inside the .perun/jobs/ directory as files of the form:

bin-collector-workload-timestamp.perf

In order to store the generated profiles, run the following, with i@p corresponding to a pending tag, which can be obtained by running perun status:

perun add i@p

perun run job -c time -b ./mybin -w file.in -w file2.in -p normalizer

This command profiles two commands ./mybin file.in and ./mybin file2.in and collects the profiling data using the Time Collector. The profiles are then normalized with the Normalizer Postprocessor.

perun run job -c complexity -b ./mybin -w sll.cpp -cp complexity targetdir=./src

This command runs one job ./mybin sll.cpp using the Trace Collector, with custom binaries targeted at the ./src directory.

perun run job -c mcollect -b ./mybin -b ./otherbin -w input.txt -p normalizer -p clusterizer

This command runs two jobs ./mybin input.txt and ./otherbin input.txt and collects the profiles using the Memory Collector. The profiles are then postprocessed, first by the Normalizer Postprocessor and then by the Clusterizer.

Refer to Automating Runs and Perun’s Profile Format for more details about the automation and the lifetimes of profiles. For a list of available collectors and postprocessors refer to Supported Collectors and Supported Postprocessors, respectively.

perun run job [OPTIONS]

Options

-b, --cmd <cmd>

Required Command that is being profiled; corresponds either to some script, binary or command, e.g. ./mybin or perun.

-a, --args <args>

Additional parameters for <cmd>, e.g. status or -al.

-w, --workload <workload>

Inputs for <cmd>, e.g. ./subdir is a possible workload for the ls command.

-c, --collector <collector>

Required Profiler used for the collection of profiling data for the given <cmd>.

Options:

trace | memory | time | complexity | bounds

-cp, --collector-params <collector_params>

Additional parameters for the <collector> read from the file in YAML format

-p, --postprocessor <postprocessor>

After each collection of data, <postprocessor> is run to postprocess the collected resources.

Options:

clusterizer | normalizer | regression-analysis | regressogram | moving-average | kernel-regression

-pp, --postprocessor-params <postprocessor_params>

Additional parameters for the <postprocessor> read from the file in YAML format

perun run matrix

Runs the job matrix specified in the local.yml configuration.

This command loads the job configuration from the local configuration, builds the job matrix and subsequently runs the jobs, collecting a list of profiles. Each profile is then stored in the .perun/jobs directory and, moreover, annotated by setting the origin key to the current HEAD. This serves as a check against assigning such profiles to different minor versions.

The job matrix is defined in the YAML format and consists of a specification of binaries with corresponding arguments and workloads, supported collectors of profiling data, and postprocessors that alter the collected profiles.

Refer to Automating Runs and Job Matrix Format for more details on how to specify the job matrix inside the local configuration, and to Perun Configuration files for how to work with Perun’s configuration files.

perun run matrix [OPTIONS]

Options

-q, --without-vcs-history

Will not print the VCS history tree during the collection of the data.
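
For example, the following runs the whole pre-configured matrix without printing the VCS history tree:

perun run matrix -q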

Overview of Jobs

Usually, during the profiling of an application, we first collect the data by means of a profiler (or profiling data collector, whichever terminology is used) and can further augment the collected data by an ordered list of postprocessing phases (e.g. for filtering out unwanted data, normalizing or scaling the amounts, etc.). As a result, we generate one profile for each application configuration and each profiling job. Thus, one profiling job can be considered a collection of profiling data from an application in one certain configuration, using one collector and an ordered set of postprocessors.

One configuration of the application can be partitioned into three parts (two of them optional):

  1. The actual command that is being profiled, i.e. either the binary or wrapper script that is executed as one command from the terminal and ends with success or failure. Examples of commands are e.g. perun itself, ls or ./my_binary.

  2. A set of arguments for the command (optional), i.e. the parameters or arguments that are supplied to the profiled command. The intuition behind arguments is to allow setting various optimization levels or profiling different configurations of one program. Examples of arguments (or parameters) are e.g. log, -al or -O2 -v.

  3. Input workloads (optional), i.e. different inputs for the profiled command. While workloads could be considered arguments, separating them allows a finer specification of jobs, e.g. when we want to profile our program on workloads of different sizes under different configurations (since degradations usually manifest on bigger workloads). Examples of workloads are e.g. HEAD, /dir/subdir or << "Hello world".

From the user specification, commands, arguments and workloads are combined using the Cartesian product, which yields the list of full application configurations. Then, for each such configuration (e.g. perun log HEAD, ls -al /dir/subdir or ./my_binary -O2 -v << "Hello world"), we run the specified collectors and finally the list of postprocessors. This process is automated using either perun run job or perun run matrix, which differ in the way the user specification is obtained; the combination itself is sketched below.
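
The following minimal Python sketch (illustrative only, not Perun’s actual implementation) shows how such a Cartesian combination of a user specification yields the list of jobs:

import itertools

cmds = ["perun"]
args = ["log", "log --short"]
workloads = ["HEAD", "HEAD~1"]

# the Cartesian product of commands, arguments and workloads yields the full configurations
jobs = [
    f"{cmd} {arg} {workload}"
    for cmd, arg, workload in itertools.product(cmds, args, workloads)
]
# -> ['perun log HEAD', 'perun log HEAD~1', 'perun log --short HEAD', 'perun log --short HEAD~1']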

Each collector (resp. postprocessor) runs in up to three phases (with the pre and post phases being optional). First, the function before() is executed (if implemented by the given collector or postprocessor), where the collector (resp. postprocessor) can perform additional preparation before the actual collection (resp. postprocessing) of the data, e.g. compiling custom binaries. Then the actual collect() (resp. postprocess()) is executed, which runs the given job with the specified collection (resp. postprocessing) unit and generates a profile (potentially in a raw or intermediate format). Finally, the after() phase is run, which can further postprocess the generated profile (after a successful collection), e.g. by required filtering of data or by transforming raw profiles to Perun’s Profile Format (see Collectors Overview and Postprocessors Overview for a more detailed description of the units). During these phases, kwargs are passed through and share the specification, and can be used for passing additional information to the following phases. The resulting kwargs have to contain the profile key, which contains the profile w.r.t. the Specification of Profile Format.

The overall process can be described by the following pseudocode:

for (cmd, argument, workload) in jobs:
   for collector in collectors:
      # optional preparation, e.g. compiling custom binaries
      collector.before(cmd, argument, workload)
      # actual collection of the profiling data
      collector.collect(cmd, argument, workload)
      # optional transformation of the raw data into a profile
      profile = collector.after()
      for postprocessor in postprocessors:
         postprocessor.before(profile)
         postprocessor.postprocess(profile)
         profile = postprocessor.after(profile)

Note that each phase should return the following triple: (status code, status message, kwargs). The status code is used for checking the success of the called phase; in case of an error, the status message is printed.
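
For illustration only, a self-contained skeleton of a collector honouring this protocol might look as follows (a hedged sketch; the dummy measurement, the local CollectStatus enum and the profile structure are placeholders, and the actual interface of real collectors is described in Collectors Overview):

from enum import Enum

class CollectStatus(Enum):
    OK = 0
    ERROR = 1

def before(cmd, args, workload, **kwargs):
    # optional preparation phase, e.g. compiling custom binaries
    return CollectStatus.OK, "preparation finished", dict(kwargs, cmd=cmd, args=args, workload=workload)

def collect(cmd, args, workload, **kwargs):
    # actual collection of the profiling data for one job
    # (a dummy measurement stands in for a real profiler run here)
    kwargs["raw_data"] = {"command": f"{cmd} {args} {workload}", "amount": 0.0}
    return CollectStatus.OK, "collection finished", kwargs

def after(**kwargs):
    # optional transformation of the raw data into the profile format;
    # the resulting kwargs must contain the 'profile' key
    kwargs["profile"] = {"resources": [kwargs["raw_data"]]}
    return CollectStatus.OK, "raw data transformed to profile", kwargs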

Before this overall process, one can run a custom set of commands by setting the execute.pre_run key. This is mostly meant for compiling a new version or preparing other necessary requirements before the actual collection, as sketched below.
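
A possible shape of this key in the local configuration is the following (the build commands are only illustrative placeholders; only the execute.pre_run key itself comes from the text above):

execute:
   pre_run:
      - make clean
      - make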

_images/lifetime-of-profile.svg

For specification and details about collectors, postprocessors and internal storage of Perun refer to Collectors Overview, Postprocessors Overview and Perun Internals.

Job Matrix Format

In order to maximize the automation of running jobs, you can specify the commands, arguments, workloads, collectors and postprocessors (and their internal configurations) in the Perun config, as described in the Overview of Jobs. Job matrices are meant for regular profiling jobs and should reduce the profiling to a single perun run matrix command. Both the config and the specification of the job matrix are based on the YAML format.

Full example of one job matrix is as follows:

cmds:
   - perun

args:
   - log
   - log --short

workloads:
   - HEAD
   - HEAD~1

collectors:
   - name: time

postprocessors:
   - name: normalizer
   - name: regression_analysis
     params:
      - method: full
      - steps: 10

The given matrix will create four jobs (perun log HEAD, perun log HEAD~1, perun log --short HEAD and perun log --short HEAD~1), which will be issued for runs. Each job will be collected by the Time Collector and then postprocessed, first by the Normalizer Postprocessor and then by Regression Analysis with the specification {'method': 'full', 'steps': 10}.

Run the following to configure the job matrix of the current project:

perun config --edit

This will open the local configuration in the editor specified by general.editor and let you specify the configuration of your application and the set of collectors and postprocessors. Unless the configuration file has been modified, it should contain helper comments. The following keys can be set in the configuration:

cmds

List of names of commands which will be profiled by the set of collectors. The commands should preferably not contain any parameters or workloads, since those can be set by separate keys, resulting in a finer specification of the configuration.

cmds:
   - perun
   - ls
   - ./myclientbinary
   - ./myserverbinary

args

List of arguments (or parameters) which are supplied to the profiled commands. It is advised to differentiate between arguments/parameters and workloads. While their semantics may seem close, separating these concerns results in a more verbose performance history.

args:
   - log
   - log --short
   - -al
   - -q -O2

workloads

List of workloads which are supplied to the profiled commands. Workloads represent program inputs and supplied files.

workloads:
   - HEAD
   - HEAD~1
   - /usr/share
   - << "Hello world!"

From version 0.15.1 you can use workload generators instead. See List of Supported Workload Generators for more information about the supported workload generators and generators.workload for more information on how to specify the workload generators in the configuration files.

collectors

List of collectors used to collect data for the given configuration of the application represented by commands, arguments and workloads. Each collector is specified by its name and additional params, which correspond to a dictionary of (key, value) parameters. Note that the same collector can be specified more than once (for cases when one needs different collector configurations). For a list of supported collectors refer to Supported Collectors.

collectors:
   - name: memory
     params:
         - sampling: 1
   - name: time

postprocessors

List of postprocessors which are used after the successful collection of the profiling data. Each postprocessor is specified by its name and additional params, which correspond to a dictionary of (key, value) parameters. Note that the same postprocessor can be specified more than once. For a list of supported postprocessors refer to Supported Postprocessors.

postprocessors:
   - name: normalizer
   - name: regression_analysis
     params:
      - method: full
      - steps: 10

List of Supported Workload Generators

From version 0.15.1, Perun supports the specification of workload generators instead of the raw workload values specified in workloads. These generators continuously generate workloads, and internally Perun either merges the resources into one single profile or gradually generates a profile for each workload.

The generators are specified in the generators.workload section. These specifications are collected across all of the configurations in the hierarchy.

You can use some basic generators specified in the shared configuration: basic_strings (which generates strings of lengths from the interval (8, 128) with an increment of 8), basic_integers (which generates integers from the interval (100, 10000) with an increment of 200) or basic_files (which generates text files with the number of lines from the interval (10, 10000) with an increment of 1000).

Generic settings

All generators can be configured using the following generic settings:

  • profile_for_each_workload: by default this option is set to false, in which case the resources collected for all generated workloads are merged into one single profile. If this option is instead set to a true value (true, 1, yes, etc.), then Perun will generate a separate profile for each generated workload (see the sketch below).
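
For illustration, the option could be combined with one of the generators described below as follows (a sketch, assuming the option is placed in the generator entry alongside its other parameters):

generators:
  workload:
    - id: integer_generator
      type: integer
      min_range: 10
      max_range: 100
      step: 10
      profile_for_each_workload: true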

Singleton Generator

Singleton Generator generates only one single value. This generator corresponds to the default behaviour of Perun, i.e. each workload specified in workloads is passed to the profiled program as a string.

Currently, by default, any string specified in workloads that does not correspond to some generator specified in generators.workload is converted to a Singleton Generator.

The Singleton Generator can be configured by the following options:

  • value: singleton value that is passed as workload.

Integer Generator

Integer Generator generates a range of integers.

The Integer Generator starts from the min_range value, and continuously increments this value by step (by default equal to 1) until it reaches max_range (inclusive).

The following shows an example of an integer generator, which continuously generates the workloads 10, 20, …, 90, 100:

generators:
  workload:
    - id: integer_generator
      type: integer
      min_range: 10
      max_range: 100
      step: 10

The Integer Generator can be configured by the following options:

  • min_range: the minimal integer value that shall be generated.

  • max_range: the maximal integer value that shall be generated.

  • step: the step (or increment) of the range.

String Generator

String Generator generates strings of changing length.

The String Generator starts generating random strings of length min_len, and continuously increments this length by step_len (by default equal to 1) until it reaches max_len (inclusive).

The following shows an example of a string generator, which continuously generates workload strings of lengths 1, 2, …, 9, 10:

generators:
  workload:
    - id: string_generator
      type: string
      min_len: 1
      max_len: 10
      step_len: 1

The String Generator can be configured by the following options:

  • min_len: the minimal length of the string that shall be generated.

  • max_len: the maximal length of the string that shall be generated.

  • step_len: the step (or increment) of the lengths.

Text File Generator

Text File Generator generates a range of random text files.

The TextFile Generator generates files with random contents (lorem ipsum) starting from min_lines lines, and continuously increments this value by step until the number of lines in the generated file reaches max_lines. Each row will then either have the maximal length of max_chars (if randomize_rows is set to a false value), or the length is randomized from the interval (min_chars, max_chars).

The following shows an example of a text file generator, which continuously generates text files with 10, 20, …, 90, 100 lines:

generators:
  workload:
    - id: textfile_generator
      type: textfile
      min_lines: 10
      max_lines: 100
      step: 10

The TextFile Generator can be configured by the following options:

  • min_lines: the minimal number of lines in the file that shall be generated.

  • max_lines: the maximal number of lines in the file that shall be generated.

  • step: the step (or increment) of the range. By default set to 1.

  • min_chars: the minimal number of characters on one line. By default set to 5.

  • max_chars: the maximal number of characters on one line. By default set to 80.

  • randomize_rows: by default set to true; the rows in the file then have a randomized length from the interval (min_chars, max_chars). Otherwise (if set to false), the lines will always be of the maximal length (max_chars).