The notebook in this repo served two goals:
- As a playground for continuing to learn Panel (following the excellent PyData tutorial by James Bednar, Panel: Dashboards for PyData (Austin 2019))
- To share an app demonstrating the effect of colormaps on perception (and on the ability to see fault edges on a seismic horizon)
The first version of the app was presented as a lightning talk at the Transform 2020 virtual conference organized by Software Underground; you can watch a video recording of the presentation here.
To create the conda environment for this tutorial, run `conda env create -f environment.yml`.
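Once the environment is created, you can activate it and open the notebook locally. A sketch of the typical commands; `<env-name>` is a placeholder, so substitute the value of the `name:` field in environment.yml:

```shell
# activate the environment created from environment.yml
# (replace <env-name> with the name: field from that file)
conda activate <env-name>

# launch JupyterLab with the tutorial notebook
jupyter lab Demonstrate_colormap_distortions_interactive_Panel.ipynb
```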
Don't worry if you're new to this — it's easier than it looks!
- Click the "launch binder" badge above (the rectangular button with the orange logo)
- A new browser tab will open showing a loading screen
- Be patient! The first time may take 1-3 minutes while Binder builds the environment
- You'll see a progress log — this is normal
- Once ready, you'll see JupyterLab — a coding environment that runs in your browser
- The notebook file (`Demonstrate_colormap_distortions_interactive_Panel.ipynb`) should already be open
- If not, double-click the notebook file in the left sidebar to open it
- Look at the menu bar at the top of the screen
- Click on Run (in the menu bar)
- Select Run All Cells from the dropdown menu
- Alternatively, you can use the keyboard shortcut: hold `Shift` and press `Enter` repeatedly to run cells one by one
- The notebook will execute each code cell from top to bottom
- You may see some output appearing as cells run
- Scroll down to the bottom of the notebook — the interactive app will appear there
- This may take 15-30 seconds after running all cells
- You'll see dropdown menus to select different colormaps
- Change the colormap selection and watch the plots update
- Compare how different colormaps affect the visualization
- If Binder takes too long: Sometimes Binder servers are busy. Try again in a few minutes.
- If you see errors: Try clicking Kernel → Restart Kernel and Run All Cells from the menu
- If the app doesn't appear: Make sure you scrolled all the way to the bottom of the notebook
If you would like some background, please read Crameri et al., 2020, The misuse of colour in science communication, Nat Commun 11, 5444, and my Society of Exploration Geophysicists tutorial Evaluate and compare colormaps.
The app includes 5 colormap collections to explore:
| Collection | Description | Examples |
|---|---|---|
| matplotlib | Standard Matplotlib colormaps | viridis, plasma, jet, rainbow, cubehelix |
| colorcet | Peter Kovesi's perceptually uniform colormaps | cet_fire, cet_rainbow, cet_bgy |
| mycarta | My custom perceptual colormaps (background) | matteo_cube, matteo_cubeYF, matteo_linear_L |
| crameri | Fabio Crameri's scientific colormaps - perceptually uniform, colorblind-friendly | batlow, roma, vik, hawaii, oslo |
| cmocean | Kristen Thyng's oceanography colormaps - perceptually uniform | thermal, haline, deep, solar, ice |
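As a quick sketch of how these collections plug into Python (assuming matplotlib ≥ 3.5 for the `matplotlib.colormaps` registry; colorcet and cmocean each expose or register their palettes on import, if installed):

```python
import matplotlib

# matplotlib's built-in colormap registry
print("viridis" in matplotlib.colormaps)  # True

# importing colorcet registers its colormaps under a "cet_" prefix (if installed)
try:
    import colorcet  # noqa: F401
    print("cet_fire" in matplotlib.colormaps)
except ImportError:
    print("colorcet not installed")

# cmocean exposes its colormaps as attributes of cmocean.cm (if installed)
try:
    import cmocean
    print(cmocean.cm.thermal.name)
except ImportError:
    print("cmocean not installed")
```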
The idea behind this app is to allow comparing any of a wide variety of colormaps against a good perceptual benchmark. As such:
- In the top row, I chose grayscale as the reference perceptually uniform colormap. (N.B. not all grayscale colormaps are truly 100% perceptually uniform when you plot their Lightness, but this is a decent approximation.)
- The left column is purely a visual reference: the data with grayscale (top) vs. the data with the chosen colormap (bottom). (N.B. I plan at some point to add an option to show a deuteranope simulation as an extra.)
- The real comparison happens in the middle column (it should be fairly intuitive): only intensity is shown, with a monochromatic palette, for both the grayscale (top) and the colormapped (bottom) data. A perceptual colormap with uniform incremental contrast at the bottom would look like the benchmark at the top. Below is an example for Viridis:
- The right column uses Sobel edge detection to enhance the visibility of potential artifacts caused by non-perceptual colormaps. Additionally, edge detection is typically an interpretation product, so it is a good way to show what to expect, and how artifacts affect a real-world workflow. Below is an example with nipy_spectral, highlighting scarps (continuous arrow) and plateaus (dashed arrow):
- As further evidence, please compare the hillshade versions with contours:
- The interesting thing to me is that, according to this intensity-based tool, even a perceptual version of the rainbow (cet_rainbow) still has some issues. They are subtle, but they are definitely there: the thin white strips (caused by hard yellow edges), indicated by yellow arrows, and the red's artificial decrease in intensity (indicated by the purple arrow), which gives the impression of lows where there should be highs:
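The grayscale-benchmark idea above can also be checked numerically: sample a colormap, convert the colors to CIELAB Lightness (L*), and see whether L* increases monotonically. A minimal sketch, assuming matplotlib ≥ 3.5 (the sRGB-to-L* conversion is written out by hand so no extra packages are needed); this is an illustration, not the app's actual code:

```python
import numpy as np
from matplotlib import colormaps  # matplotlib >= 3.5

def srgb_to_lightness(rgb):
    """Convert sRGB values in [0, 1] to CIELAB L* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # undo the sRGB gamma to get linear RGB
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # relative luminance Y (D65 primaries)
    Y = linear @ np.array([0.2126729, 0.7151522, 0.0721750])
    # Y -> L*
    eps, kappa = 216 / 24389, 24389 / 27
    return np.where(Y > eps, 116 * np.cbrt(Y) - 16, kappa * Y)

def lightness_profile(name, n=256):
    """L* along a named matplotlib colormap, sampled at n points."""
    rgb = colormaps[name](np.linspace(0, 1, n))[:, :3]
    return srgb_to_lightness(rgb)

# grayscale climbs steadily in L*; jet rises and falls, which is what
# causes the false edges and false lows the app makes visible
for name in ("gray", "jet"):
    L = lightness_profile(name)
    print(name, "monotonic L*:", bool(np.all(np.diff(L) >= 0)))
```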
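The Sobel step can be sketched in the same spirit: render a perfectly smooth intensity ramp through a colormap, take the luminance of the resulting RGB image, and look at the edge response. Any structure in the edge map is an artifact of the colormap, not of the data. A toy version with plain NumPy (again an illustration, not the app's plotting pipeline):

```python
import numpy as np
from matplotlib import colormaps  # matplotlib >= 3.5

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x kernel
KY = KX.T                                                   # Sobel y kernel

def conv2(img, kernel):
    """3x3 correlation with edge padding (sign flips don't affect magnitude)."""
    windows = np.lib.stride_tricks.sliding_window_view(
        np.pad(img, 1, mode="edge"), (3, 3))
    return np.einsum("ijkl,kl->ij", windows, kernel)

# a smooth horizontal ramp: the "data" has no edges at all
data = np.tile(np.linspace(0, 1, 200), (50, 1))

for name in ("gray", "jet"):
    rgb = colormaps[name](data)[..., :3]
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])     # rendered intensity
    mag = np.hypot(conv2(lum, KX), conv2(lum, KY))     # Sobel edge magnitude
    # grayscale gives a nearly flat response; jet's varies wildly,
    # i.e. it invents edges that are not in the data
    print(f"{name}: edge-response std = {mag.std():.4f}")
```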
For more background and insights on colormap perception effects, check out my blog post: Busting bad colormaps with Python and Panel
This work is licensed under a CC BY Creative Commons License, with the exception of the data used (a seismic horizon from the Penobscot 3D which is covered by a CC BY-SA Creative Commons License).