1# Contributing to Zstandard 2We want to make contributing to this project as easy and transparent as 3possible. 4 5## Our Development Process 6New versions are being developed in the "dev" branch, 7or in their own feature branch. 8When they are deemed ready for a release, they are merged into "master". 9 10As a consequences, all contributions must stage first through "dev" 11or their own feature branch. 12 13## Pull Requests 14We actively welcome your pull requests. 15 161. Fork the repo and create your branch from `dev`. 172. If you've added code that should be tested, add tests. 183. If you've changed APIs, update the documentation. 194. Ensure the test suite passes. 205. Make sure your code lints. 216. If you haven't already, complete the Contributor License Agreement ("CLA"). 22 23## Contributor License Agreement ("CLA") 24In order to accept your pull request, we need you to submit a CLA. You only need 25to do this once to work on any of Facebook's open source projects. 26 27Complete your CLA here: <https://code.facebook.com/cla> 28 29## Workflow 30Zstd uses a branch-based workflow for making changes to the codebase. Typically, zstd 31will use a new branch per sizable topic. For smaller changes, it is okay to lump multiple 32related changes into a branch. 33 34Our contribution process works in three main stages: 351. Local development 36 * Update: 37 * Checkout your fork of zstd if you have not already 38 ``` 39 git checkout https://github.com/<username>/zstd 40 cd zstd 41 ``` 42 * Update your local dev branch 43 ``` 44 git pull https://github.com/facebook/zstd dev 45 git push origin dev 46 ``` 47 * Topic and development: 48 * Make a new branch on your fork about the topic you're developing for 49 ``` 50 # branch names should be consise but sufficiently informative 51 git checkout -b <branch-name> 52 git push origin <branch-name> 53 ``` 54 * Make commits and push 55 ``` 56 # make some changes = 57 git add -u && git commit -m <message> 58 git push origin <branch-name> 59 ``` 60 * Note: run local tests to ensure that your changes didn't break existing functionality 61 * Quick check 62 ``` 63 make shortest 64 ``` 65 * Longer check 66 ``` 67 make test 68 ``` 692. Code Review and CI tests 70 * Ensure CI tests pass: 71 * Before sharing anything to the community, make sure that all CI tests pass on your local fork. 72 See our section on setting up your CI environment for more information on how to do this. 73 * Ensure that static analysis passes on your development machine. See the Static Analysis section 74 below to see how to do this. 75 * Create a pull request: 76 * When you are ready to share you changes to the community, create a pull request from your branch 77 to facebook:dev. You can do this very easily by clicking 'Create Pull Request' on your fork's home 78 page. 79 * From there, select the branch where you made changes as your source branch and facebook:dev 80 as the destination. 81 * Examine the diff presented between the two branches to make sure there is nothing unexpected. 82 * Write a good pull request description: 83 * While there is no strict template that our contributors follow, we would like them to 84 sufficiently summarize and motivate the changes they are proposing. We recommend all pull requests, 85 at least indirectly, address the following points. 86 * Is this pull request important and why? 87 * Is it addressing an issue? If so, what issue? (provide links for convenience please) 88 * Is this a new feature? If so, why is it useful and/or necessary? 89 * Are there background references and documents that reviewers should be aware of to properly assess this change? 90 * Note: make sure to point out any design and architectural decisions that you made and the rationale behind them. 91 * Note: if you have been working with a specific user and would like them to review your work, make sure you mention them using (@<username>) 92 * Submit the pull request and iterate with feedback. 933. Merge and Release 94 * Getting approval: 95 * You will have to iterate on your changes with feedback from other collaborators to reach a point 96 where your pull request can be safely merged. 97 * To avoid too many comments on style and convention, make sure that you have a 98 look at our style section below before creating a pull request. 99 * Eventually, someone from the zstd team will approve your pull request and not long after merge it into 100 the dev branch. 101 * Housekeeping: 102 * Most PRs are linked with one or more Github issues. If this is the case for your PR, make sure 103 the corresponding issue is mentioned. If your change 'fixes' or completely addresses the 104 issue at hand, then please indicate this by requesting that an issue be closed by commenting. 105 * Just because your changes have been merged does not mean the topic or larger issue is complete. Remember 106 that the change must make it to an official zstd release for it to be meaningful. We recommend 107 that contributers track the activity on their pull request and corresponding issue(s) page(s) until 108 their change makes it to the next release of zstd. Users will often discover bugs in your code or 109 suggest ways to refine and improve your initial changes even after the pull request is merged. 110 111## Static Analysis 112Static analysis is a process for examining the correctness or validity of a program without actually 113executing it. It usually helps us find many simple bugs. Zstd uses clang's `scan-build` tool for 114static analysis. You can install it by following the instructions for your OS on https://clang-analyzer.llvm.org/scan-build. 115 116Once installed, you can ensure that our static analysis tests pass on your local development machine 117by running: 118``` 119make staticAnalyze 120``` 121 122In general, you can use `scan-build` to static analyze any build script. For example, to static analyze 123just `contrib/largeNbDicts` and nothing else, you can run: 124 125``` 126scan-build make -C contrib/largeNbDicts largeNbDicts 127``` 128 129## Performance 130Performance is extremely important for zstd and we only merge pull requests whose performance 131landscape and corresponding trade-offs have been adequately analyzed, reproduced, and presented. 132This high bar for performance means that every PR which has the potential to 133impact performance takes a very long time for us to properly review. That being said, we 134always welcome contributions to improve performance (or worsen performance for the trade-off of 135something else). Please keep the following in mind before submitting a performance related PR: 136 1371. Zstd isn't as old as gzip but it has been around for time now and its evolution is 138very well documented via past Github issues and pull requests. It may be the case that your 139particular performance optimization has already been considered in the past. Please take some 140time to search through old issues and pull requests using keywords specific to your 141would-be PR. Of course, just because a topic has already been discussed (and perhaps rejected 142on some grounds) in the past, doesn't mean it isn't worth bringing up again. But even in that case, 143it will be helpful for you to have context from that topic's history before contributing. 1442. The distinction between noise and actual performance gains can unfortunately be very subtle 145especially when microbenchmarking extremely small wins or losses. The only remedy to getting 146something subtle merged is extensive benchmarking. You will be doing us a great favor if you 147take the time to run extensive, long-duration, and potentially cross-(os, platform, process, etc) 148benchmarks on your end before submitting a PR. Of course, you will not be able to benchmark 149your changes on every single processor and os out there (and neither will we) but do that best 150you can:) We've adding some things to think about when benchmarking below in the Benchmarking 151Performance section which might be helpful for you. 1523. Optimizing performance for a certain OS, processor vendor, compiler, or network system is a perfectly 153legitimate thing to do as long as it does not harm the overall performance health of Zstd. 154This is a hard balance to strike but please keep in mind other aspects of Zstd when 155submitting changes that are clang-specific, windows-specific, etc. 156 157## Benchmarking Performance 158Performance microbenchmarking is a tricky subject but also essential for Zstd. We value empirical 159testing over theoretical speculation. This guide it not perfect but for most scenarios, it 160is a good place to start. 161 162### Stability 163Unfortunately, the most important aspect in being able to benchmark reliably is to have a stable 164benchmarking machine. A virtual machine, a machine with shared resources, or your laptop 165will typically not be stable enough to obtain reliable benchmark results. If you can get your 166hands on a desktop, this is usually a better scenario. 167 168Of course, benchmarking can be done on non-hyper-stable machines as well. You will just have to 169do a little more work to ensure that you are in fact measuring the changes you've made not and 170noise. Here are some things you can do to make your benchmarks more stable: 171 1721. The most simple thing you can do to drastically improve the stability of your benchmark is 173to run it multiple times and then aggregate the results of those runs. As a general rule of 174thumb, the smaller the change you are trying to measure, the more samples of benchmark runs 175you will have to aggregate over to get reliable results. Here are some additional things to keep in 176mind when running multiple trials: 177 * How you aggregate your samples are important. You might be tempted to use the mean of your 178 results. While this is certainly going to be a more stable number than a raw single sample 179 benchmark number, you might have more luck by taking the median. The mean is not robust to 180 outliers whereas the median is. Better still, you could simply take the fastest speed your 181 benchmark achieved on each run since that is likely the fastest your process will be 182 capable of running your code. In our experience, this (aggregating by just taking the sample 183 with the fastest running time) has been the most stable approach. 184 * The more samples you have, the more stable your benchmarks should be. You can verify 185 your improved stability by looking at the size of your confidence intervals as you 186 increase your sample count. These should get smaller and smaller. Eventually hopefully 187 smaller than the performance win you are expecting. 188 * Most processors will take some time to get `hot` when running anything. The observations 189 you collect during that time period will very different from the true performance number. Having 190 a very large number of sample will help alleviate this problem slightly but you can also 191 address is directly by simply not including the first `n` iterations of your benchmark in 192 your aggregations. You can determine `n` by simply looking at the results from each iteration 193 and then hand picking a good threshold after which the variance in results seems to stabilize. 1942. You cannot really get reliable benchmarks if your host machine is simultaneously running 195another cpu/memory-intensive application in the background. If you are running benchmarks on your 196personal laptop for instance, you should close all applications (including your code editor and 197browser) before running your benchmarks. You might also have invisible background applications 198running. You can see what these are by looking at either Activity Monitor on Mac or Task Manager 199on Windows. You will get more stable benchmark results of you end those processes as well. 200 * If you have multiple cores, you can even run your benchmark on a reserved core to prevent 201 pollution from other OS and user processes. There are a number of ways to do this depending 202 on your OS: 203 * On linux boxes, you have use https://github.com/lpechacek/cpuset. 204 * On Windows, you can "Set Processor Affinity" using https://www.thewindowsclub.com/processor-affinity-windows 205 * On Mac, you can try to use their dedicated affinity API https://developer.apple.com/library/archive/releasenotes/Performance/RN-AffinityAPI/#//apple_ref/doc/uid/TP40006635-CH1-DontLinkElementID_2 2063. To benchmark, you will likely end up writing a separate c/c++ program that will link libzstd. 207Dynamically linking your library will introduce some added variation (not a large amount but 208definitely some). Statically linking libzstd will be more stable. Static libraries should 209be enabled by default when building zstd. 2104. Use a profiler with a good high resolution timer. See the section below on profiling for 211details on this. 2125. Disable frequency scaling, turbo boost and address space randomization (this will vary by OS) 2136. Try to avoid storage. On some systems you can use tmpfs. Putting the program, inputs and outputs on 214tmpfs avoids touching a real storage system, which can have a pretty big variability. 215 216Also check our LLVM's guide on benchmarking here: https://llvm.org/docs/Benchmarking.html 217 218### Zstd benchmark 219The fastest signal you can get regarding your performance changes is via the in-build zstd cli 220bench option. You can run Zstd as you typically would for your scenario using some set of options 221and then additionally also specify the `-b#` option. Doing this will run our benchmarking pipeline 222for that options you have just provided. If you want to look at the internals of how this 223benchmarking script works, you can check out programs/benchzstd.c 224 225For example: say you have made a change that you believe improves the speed of zstd level 1. The 226very first thing you should use to asses whether you actually achieved any sort of improvement 227is `zstd -b`. You might try to do something like this. Note: you can use the `-i` option to 228specify a running time for your benchmark in seconds (default is 3 seconds). 229Usually, the longer the running time, the more stable your results will be. 230 231``` 232$ git checkout <commit-before-your-change> 233$ make && cp zstd zstd-old 234$ git checkout <commit-after-your-change> 235$ make && cp zstd zstd-new 236$ zstd-old -i5 -b1 <your-test-data> 237 1<your-test-data> : 8990 -> 3992 (2.252), 302.6 MB/s , 626.4 MB/s 238$ zstd-new -i5 -b1 <your-test-data> 239 1<your-test-data> : 8990 -> 3992 (2.252), 302.8 MB/s , 628.4 MB/s 240``` 241 242Unless your performance win is large enough to be visible despite the intrinsic noise 243on your computer, benchzstd alone will likely not be enough to validate the impact of your 244changes. For example, the results of the example above indicate that effectively nothing 245changed but there could be a small <3% improvement that the noise on the host machine 246obscured. So unless you see a large performance win (10-15% consistently) using just 247this method of evaluation will not be sufficient. 248 249### Profiling 250There are a number of great profilers out there. We're going to briefly mention how you can 251profile your code using `instruments` on mac, `perf` on linux and `visual studio profiler` 252on windows. 253 254Say you have an idea for a change that you think will provide some good performance gains 255for level 1 compression on Zstd. Typically this means, you have identified a section of 256code that you think can be made to run faster. 257 258The first thing you will want to do is make sure that the piece of code is actually taking up 259a notable amount of time to run. It is usually not worth optimzing something which accounts for less than 2600.0001% of the total running time. Luckily, there are tools to help with this. 261Profilers will let you see how much time your code spends inside a particular function. 262If your target code snippit is only part of a function, it might be worth trying to 263isolate that snippit by moving it to its own function (this is usually not necessary but 264might be). 265 266Most profilers (including the profilers dicusssed below) will generate a call graph of 267functions for you. Your goal will be to find your function of interest in this call grapch 268and then inspect the time spent inside of it. You might also want to to look at the 269annotated assembly which most profilers will provide you with. 270 271#### Instruments 272We will once again consider the scenario where you think you've identified a piece of code 273whose performance can be improved upon. Follow these steps to profile your code using 274Instruments. 275 2761. Open Instruments 2772. Select `Time Profiler` from the list of standard templates 2783. Close all other applications except for your instruments window and your terminal 2794. Run your benchmarking script from your terminal window 280 * You will want a benchmark that runs for at least a few seconds (5 seconds will 281 usually be long enough). This way the profiler will have something to work with 282 and you will have ample time to attach your profiler to this process:) 283 * I will just use benchzstd as my bencharmking script for this example: 284``` 285$ zstd -b1 -i5 <my-data> # this will run for 5 seconds 286``` 2875. Once you run your benchmarking script, switch back over to instruments and attach your 288process to the time profiler. You can do this by: 289 * Clicking on the `All Processes` drop down in the top left of the toolbar. 290 * Selecting your process from the dropdown. In my case, it is just going to be labled 291 `zstd` 292 * Hitting the bright red record circle button on the top left of the toolbar 2936. You profiler will now start collecting metrics from your bencharking script. Once 294you think you have collected enough samples (usually this is the case after 3 seconds of 295recording), stop your profiler. 2967. Make sure that in toolbar of the bottom window, `profile` is selected. 2978. You should be able to see your call graph. 298 * If you don't see the call graph or an incomplete call graph, make sure you have compiled 299 zstd and your benchmarking scripg using debug flags. On mac and linux, this just means 300 you will have to supply the `-g` flag alone with your build script. You might also 301 have to provide the `-fno-omit-frame-pointer` flag 3029. Dig down the graph to find your function call and then inspect it by double clicking 303the list item. You will be able to see the annotated source code and the assembly side by 304side. 305 306#### Perf 307 308This wiki has a pretty detailed tutorial on getting started working with perf so we'll 309leave you to check that out of you're getting started: 310 311https://perf.wiki.kernel.org/index.php/Tutorial 312 313Some general notes on perf: 314* Use `perf stat -r # <bench-program>` to quickly get some relevant timing and 315counter statistics. Perf uses a high resolution timer and this is likely one 316of the first things your team will run when assessing your PR. 317* Perf has a long list of hardware counters that can be viewed with `perf --list`. 318When measuring optimizations, something worth trying is to make sure the handware 319counters you expect to be impacted by your change are in fact being so. For example, 320if you expect the L1 cache misses to decrease with your change, you can look at the 321counter `L1-dcache-load-misses` 322* Perf hardware counters will not work on a virtual machine. 323 324#### Visual Studio 325 326TODO 327 328 329## Setting up continuous integration (CI) on your fork 330Zstd uses a number of different continuous integration (CI) tools to ensure that new changes 331are well tested before they make it to an official release. Specifically, we use the platforms 332travis-ci, circle-ci, and appveyor. 333 334Changes cannot be merged into the main dev branch unless they pass all of our CI tests. 335The easiest way to run these CI tests on your own before submitting a PR to our dev branch 336is to configure your personal fork of zstd with each of the CI platforms. Below, you'll find 337instructions for doing this. 338 339### travis-ci 340Follow these steps to link travis-ci with your github fork of zstd 341 3421. Make sure you are logged into your github account 3432. Go to https://travis-ci.org/ 3443. Click 'Sign in with Github' on the top right 3454. Click 'Authorize travis-ci' 3465. Click 'Activate all repositories using Github Apps' 3476. Select 'Only select repositories' and select your fork of zstd from the drop down 3487. Click 'Approve and Install' 3498. Click 'Sign in with Github' again. This time, it will be for travis-pro (which will let you view your tests on the web dashboard) 3509. Click 'Authorize travis-pro' 35110. You should have travis set up on your fork now. 352 353### circle-ci 354TODO 355 356### appveyor 357Follow these steps to link circle-ci with your girhub fork of zstd 358 3591. Make sure you are logged into your github account 3602. Go to https://www.appveyor.com/ 3613. Click 'Sign in' on the top right 3624. Select 'Github' on the left panel 3635. Click 'Authorize appveyor' 3646. You might be asked to select which repositories you want to give appveyor permission to. Select your fork of zstd if you're prompted 3657. You should have appveyor set up on your fork now. 366 367### General notes on CI 368CI tests run every time a pull request (PR) is created or updated. The exact tests 369that get run will depend on the destination branch you specify. Some tests take 370longer to run than others. Currently, our CI is set up to run a short 371series of tests when creating a PR to the dev branch and a longer series of tests 372when creating a PR to the master branch. You can look in the configuration files 373of the respective CI platform for more information on what gets run when. 374 375Most people will just want to create a PR with the destination set to their local dev 376branch of zstd. You can then find the status of the tests on the PR's page. You can also 377re-run tests and cancel running tests from the PR page or from the respective CI's dashboard. 378 379## Issues 380We use GitHub issues to track public bugs. Please ensure your description is 381clear and has sufficient instructions to be able to reproduce the issue. 382 383Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe 384disclosure of security bugs. In those cases, please go through the process 385outlined on that page and do not file a public issue. 386 387## Coding Style 388* 4 spaces for indentation rather than tabs 389 390## License 391By contributing to Zstandard, you agree that your contributions will be licensed 392under both the [LICENSE](LICENSE) file and the [COPYING](COPYING) file in the root directory of this source tree. 393