
Best Practices: Cheat Sheet

A brief overview of useful Python commands and good practices.

For an explanation of these best practices, please consider reading Tutorial 1 and Tutorial 2.

I will update this page from time to time with additional useful commands.

Version Control

  • Use git!
  • Commit frequently
  • You can define a .gitignore (in any directory) to exclude files from version control (e.g. for excluding a directory with plots)
  • Use nbstripout for notebooks under version control to avoid bloat
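For example, a minimal `.gitignore` could look like this (the entries below are typical suggestions, not prescriptions — adapt them to your project):

```
# example .gitignore entries
plots/
__pycache__/
*.pyc
.ipynb_checkpoints/
.venv/
```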

nbstripout

pip install nbstripout
# in your code-repository:
nbstripout --install --attributes .gitattributes
# To deactivate, e.g. when you want to publish a notebook with plots
# nbstripout --uninstall --attributes .gitattributes

Jupyter notebooks

  • Extract code to modules frequently
  • Extract tests to unit tests
  • Reloading modules without restarting notebooks:
%load_ext autoreload
%autoreload 2
  • To create a PDF (or other formats) from your notebook (requires additional dependencies), see this page:
    jupyter nbconvert --to pdf mynotebook.ipynb
    
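To illustrate "extract code to modules": once a function has stabilized in a notebook, move it into the package (e.g. into `src/spherical_potential/analytical_profiles.py` from the repository layout below) and import it from the notebook. The Plummer profile here is only a hypothetical example of such an extracted function:

```python
# hypothetical content of src/spherical_potential/analytical_profiles.py
import numpy as np

def plummer_density(r, a=1.0, m=1.0):
    """Plummer density profile with scale radius a and total mass m."""
    return 3.0 * m / (4.0 * np.pi * a**3) * (1.0 + (r / a) ** 2) ** (-2.5)
```

In the notebook you would then write `from spherical_potential.analytical_profiles import plummer_density`; with `%autoreload 2` active, edits to the module are picked up without restarting the kernel.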

Repository Layout

├── LICENSE
├── README.md
├── pyproject.toml
├── src
│   ├── spherical_potential
│   │   ├── __init__.py
│   │   ├── analytical_profiles.py
│   │   └── integrals.py
├── notebooks
│   └── potentials.ipynb
├── tests
│   ├── test_integrals.py
│   └── test_something_else.py

pyproject.toml

[build-system]
requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "spherical-potential"
version = "0.1.0"
description = "A library for calculating the potential of spherical mass distributions"
readme = "README.md"
requires-python = ">=3.9"
license = { file = "LICENSE" }
authors = [{ name = "[Your Name]" }]  # Fill in your name!
dependencies = ["numpy"]

[tool.setuptools.packages.find]
where = ["src"]

Check here for a detailed guide.
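One common extension (an illustration, not part of the guide above) is an optional dependency group for development tools:

```
[project.optional-dependencies]
dev = ["pytest", "nbstripout"]
```

You can then install the package in editable mode together with these extras by running `pip install -e ".[dev]"` from the repository root.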

Unit tests

  • Put your tests under a “tests” directory
  • Name test files with a “test_” prefix or a “_test.py” suffix
  • Each function whose name starts with “test_” defines a unit test
  • Execute your test from the command line:
    pytest -v
    # or
    pytest -v -s # -s also shows stdout
    
  • To set up VS Code for unit tests, follow this guide or simply create .vscode/settings.json with these settings:
    {
      "python.testing.pytestArgs": [
          "tests",
          "--color=yes"
      ],
      "python.testing.unittestEnabled": false,
      "python.testing.pytestEnabled": true
    }
    
  • A useful template:
import numpy as np
import pytest

def test_that_passes():
    pass

def test_that_fails():
    assert False

def test_comparing_arrays():
    x = np.linspace(0., 1., 101)
    x2 = np.copy(x)
    x2[50] += 0.2

    assert x2 == pytest.approx(x, rel=1e-3)

@pytest.mark.parametrize("a", [1e-3, 1e-2, 1e-1])
def test_with_parameters(a):
    np.random.seed(42)
    x = np.linspace(0., 1., 100)
    x2 = x + np.random.uniform(-a, a, size=x.shape)

    assert x2 == pytest.approx(x, abs=1e-2)
Output:
============================= test session starts ==============================
platform linux -- Python 3.12.11, pytest-8.4.2, pluggy-1.6.0 -- /home/jens/repos/spherical-potential/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/jens/repos/spherical-potential
configfile: pyproject.toml
collecting ... collected 6 items

tests/test_example.py::test_that_passes PASSED                           [ 16%]
tests/test_example.py::test_that_fails FAILED                            [ 33%]
tests/test_example.py::test_comparing_arrays FAILED                      [ 50%]
tests/test_example.py::test_with_parameters[0.001] PASSED                [ 66%]
tests/test_example.py::test_with_parameters[0.01] PASSED                 [ 83%]
tests/test_example.py::test_with_parameters[0.1] FAILED                  [100%]

=================================== FAILURES ===================================
_______________________________ test_that_fails ________________________________

    def test_that_fails():
>       assert False
E       assert False

tests/test_example.py:8: AssertionError
____________________________ test_comparing_arrays _____________________________

    def test_comparing_arrays():
        x = np.linspace(0., 1., 101)
        x2 = np.copy(x)
        x2[50] += 0.2
    
>       assert x2 == pytest.approx(x, rel=1e-3)
E       assert array([0.  , ...  0.99, 1.  ]) == approx([0.0 ±... 1.0 ± 0.001])
E         
E         comparison failed. Mismatched elements: 1 / 101:
E         Max absolute difference: 0.19999999999999996
E         Max relative difference: 0.28571428571428564
E         Index | Obtained | Expected     
E         (50,) | 0.7      | 0.5 ± 5.0e-04

tests/test_example.py:15: AssertionError
__________________________ test_with_parameters[0.1] ___________________________

a = 0.1

    @pytest.mark.parametrize("a", [1e-3, 1e-2, 1e-1])
    def test_with_parameters(a):
        np.random.seed(42)
        x = np.linspace(0., 1., 100)
        x2 = x + np.random.uniform(-a, a, size=x.shape)
    
>       assert x2 == pytest.approx(x, abs=1e-2)
E       AssertionError: assert array([-0.025...  0.92157829]) == approx([0.0 ±..., 1.0 ± 0.01])
E         
E         comparison failed. Mismatched elements: 90 / 100:
E         Max absolute difference: 0.09889557657527948
E         Max relative difference: 18.701600075260114
E         Index | Obtained              | Expected                   
E         (0,)  | -0.025091976230527502 | 0.0 ± 0.01                 
E         (1,)  | 0.10024387138299333   | 0.010101010101010102 ± 0.01...
E         
E         ...Full output truncated (88 lines hidden), use '-vv' to show

tests/test_example.py:23: AssertionError
=========================== short test summary info ============================
FAILED tests/test_example.py::test_that_fails - assert False
FAILED tests/test_example.py::test_comparing_arrays - assert array([0.  , ...  0.99, 1.  ]) == approx([0.0 ±... 1.0 ± 0.001])
FAILED tests/test_example.py::test_with_parameters[0.1] - AssertionError: assert array([-0.025...  0.92157829]) == approx([0.0 ±..., ...
========================= 3 failed, 3 passed in 0.11s ==========================
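Beyond the template above, another pattern worth knowing is `pytest.raises`, which asserts that a block of code raises a specific exception (a sketch, not from the tutorials; the test names are made up):

```python
import numpy as np
import pytest

def test_raises_on_division_by_zero():
    # the with-block passes only if the enclosed code raises ZeroDivisionError
    with pytest.raises(ZeroDivisionError):
        1 / 0

def test_scalar_approx():
    # pytest.approx also works for scalars, not just arrays
    assert np.sin(np.pi) == pytest.approx(0.0, abs=1e-12)
```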

Optimizing jax.jit

Have a look here.

This post is licensed under CC BY 4.0 by the author.