Lines Matching refs:your
14 We actively welcome your pull requests.
16 1. Fork the repo and create your branch from `dev`.
20 5. Make sure your code lints.
24 In order to accept your pull request, we need you to submit a CLA. You only need
27 Complete your CLA here: <https://code.facebook.com/cla>
37 * Check out your fork of zstd if you have not already
42 * Update your local dev branch
48 * Make a new branch on your fork for the topic you're developing
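A minimal sketch of that setup, assuming the upstream repository is `facebook/zstd` and using placeholder names for your fork and topic branch:

```
# Check out your fork and track the upstream repository
git clone https://github.com/<your-username>/zstd.git
cd zstd
git remote add upstream https://github.com/facebook/zstd.git

# Update your local dev branch
git checkout dev
git pull upstream dev

# Create a topic branch for your change
git checkout -b <topic-branch>
```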
60 * Note: run local tests to ensure that your changes didn't break existing functionality
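For example, from the repository root (a sketch assuming GNU make; `check` is the quick smoke test and `test` the longer suite in zstd's top-level Makefile):

```
# Quick smoke test of the CLI and library
make check

# Longer, more exhaustive test suite
make test
```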
71 …* Before sharing anything to the community, create a pull request in your own fork against the dev…
73 … * Ensure that static analysis passes on your development machine. See the Static Analysis section
76 … * When you are ready to share your changes to the community, create a pull request from your branch
77 … to facebook:dev. You can do this very easily by clicking 'Create Pull Request' on your fork's home
79 … * From there, select the branch where you made changes as your source branch and facebook:dev
91 …* Note: if you have been working with a specific user and would like them to review your work, mak…
95 … * You will have to iterate on your changes with feedback from other collaborators to reach a point
96 where your pull request can be safely merged.
99 …* Eventually, someone from the zstd team will approve your pull request and not long after merge i…
102 … * Most PRs are linked with one or more GitHub issues. If this is the case for your PR, make sure
103 the corresponding issue is mentioned. If your change 'fixes' or completely addresses the
105 …* Just because your changes have been merged does not mean the topic or larger issue is complete. …
108 … their change makes it to the next release of zstd. Users will often discover bugs in your code or
109 … suggest ways to refine and improve your initial changes even after the pull request is merged.
114 static analysis. You can install it by following the instructions for your OS on https://clang-anal…
116 Once installed, you can ensure that our static analysis tests pass on your local development machine
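A quick sketch of what that looks like, assuming `scan-build` is on your PATH (the exact Makefile target name may differ in your checkout):

```
# Run clang static analysis via the Makefile helper target
make staticAnalyze

# Or drive it manually by wrapping the build with scan-build
scan-build make
```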
155 … on GitHub Actions (configured at `.github/workflows`), which will automatically run on PRs to your
157 These require work to set up on your local fork, and (at least for Travis CI) cost money.
158 Therefore, if the PR on your local fork passes GitHub Actions, feel free to submit a PR
164 these up on your fork in order to contribute to zstd; however, we do link to instructions for those
175 The general idea should be the same for setting up CI on your fork of zstd, but you may have to
188 very well documented via past GitHub issues and pull requests. It may be the case that your
190 time to search through old issues and pull requests using keywords specific to your
198 benchmarks on your end before submitting a PR. Of course, you will not be able to benchmark
199 your changes on every single processor and OS out there (and neither will we) but do the best
214 benchmarking machine. A virtual machine, a machine with shared resources, or your laptop
215 will typically not be stable enough to obtain reliable benchmark results. If you can get your
220 noise. Here are some things you can do to make your benchmarks more stable:
222 1. The simplest thing you can do to drastically improve the stability of your benchmark is
227 * How you aggregate your samples is important. You might be tempted to use the mean of your
230 outliers whereas the median is. Better still, you could simply take the fastest speed your
231 benchmark achieved on each run since that is likely the fastest your process will be
232 capable of running your code. In our experience, this (aggregating by just taking the sample
234 * The more samples you have, the more stable your benchmarks should be. You can verify
235 your improved stability by looking at the size of your confidence intervals as you
236 increase your sample count. These should get smaller and smaller. Eventually hopefully
241 address this directly by simply not including the first `n` iterations of your benchmark in
242 your aggregations. You can determine `n` by simply looking at the results from each iteration
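As a rough illustration of that aggregation strategy (the iteration count, the two-iteration warm-up cutoff, and the `bench.sh` script name are all hypothetical):

```
# Run the benchmark 12 times, printing one speed sample (MB/s) per line
for i in $(seq 1 12); do ./bench.sh; done > samples.txt

# Discard the first 2 warm-up iterations, then keep only the fastest sample
tail -n +3 samples.txt | sort -n | tail -1
```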
244 2. You cannot really get reliable benchmarks if your host machine is simultaneously running
245 another CPU/memory-intensive application in the background. If you are running benchmarks on your
246 personal laptop for instance, you should close all applications (including your code editor and
247 browser) before running your benchmarks. You might also have invisible background applications
250 * If you have multiple cores, you can even run your benchmark on a reserved core to prevent
252 on your OS:
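For instance, on Linux one common approach (not necessarily the exact one the full guide lists) is to pin the benchmark to a single core with `taskset`:

```
# Pin the benchmark to CPU core 2; combine with core isolation
# (e.g. isolcpus or cset shield) to keep other processes off that core
taskset -c 2 <your-benchmark-command>
```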
257 Dynamically linking your library will introduce some added variation (not a large amount but
269 The fastest signal you can get regarding your performance changes is via the built-in zstd CLI
270 bench option. You can run zstd as you typically would for your scenario using some set of options
278 specify a running time for your benchmark in seconds (default is 3 seconds).
279 Usually, the longer the running time, the more stable your results will be.
282 $ git checkout <commit-before-your-change>
284 $ git checkout <commit-after-your-change>
286 $ zstd-old -i5 -b1 <your-test-data>
287 1<your-test-data> : 8990 -> 3992 (2.252), 302.6 MB/s , 626.4 MB/s
288 $ zstd-new -i5 -b1 <your-test-data>
289 1<your-test-data> : 8990 -> 3992 (2.252), 302.8 MB/s , 628.4 MB/s
292 Unless your performance win is large enough to be visible despite the intrinsic noise
293 on your computer, benchzstd alone will likely not be enough to validate the impact of your
301 profile your code using `instruments` on Mac, `perf` on Linux, and `visual studio profiler`
311 Profilers will let you see how much time your code spends inside a particular function.
312 If your target code snippet is only part of a function, it might be worth trying to
317 functions for you. Your goal will be to find your function of interest in this call graph
323 whose performance can be improved upon. Follow these steps to profile your code using
328 3. Close all other applications except for your Instruments window and your terminal
329 4. Run your benchmarking script from your terminal window
332 and you will have ample time to attach your profiler to this process:)
337 5. Once you run your benchmarking script, switch back over to Instruments and attach your
340 * Selecting your process from the dropdown. In my case, it is just going to be labeled
343 6. Your profiler will now start collecting metrics from your benchmarking script. Once
345 recording), stop your profiler.
347 8. You should be able to see your call graph.
349 zstd and your benchmarking script using debug flags. On Mac and Linux, this just means
350 you will have to supply the `-g` flag along with your build script. You might also
352 9. Dig down the graph to find your function call and then inspect it by double clicking
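If you are on Linux rather than Mac, a roughly analogous flow with `perf` might look like this (the build invocation and placeholder benchmark command are illustrative, not zstd-specific requirements):

```
# Build with debug symbols so perf can resolve function names (the -g flag mentioned above)
CFLAGS="-O3 -g" make zstd

# Record a call-graph profile while your benchmark runs, then inspect it
perf record -g <your-benchmark-command>
perf report
```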
366 of the first things our team will run when assessing your PR.
369 counters you expect to be impacted by your change are in fact being so. For example,
370 if you expect the L1 cache misses to decrease with your change, you can look at the
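On Linux, `perf stat` can report such counters directly, for example (the event names below are common but platform-dependent):

```
# Compare hardware counters before and after your change
perf stat -e cycles,instructions,L1-dcache-load-misses,branch-misses \
    <your-benchmark-command>
```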
379 We use GitHub issues to track public bugs. Please ensure your description is
389 similar lines of code around your contribution.
488 By contributing to Zstandard, you agree that your contributions will be licensed