Hi Eivind and Vijay -
In preparation for a 0.9.3.5 release before the end of the quarter, I wanted to ask if you have automated any of the benchmarking tables that you have produced in places such as https://github.com/emeyers/Brain-Observatory-Toolbox/wiki/Live-Script-Performance-Overview ?
If not, I thought it might be nice to add a bot.test.* namespace that runs those files to (a) verify there are no errors and (b) measure the run time needed to build the benchmark table. As an example, I'm imagining that one entry point might be bot.test.suite(PLATFORM): say, bot.test.suite('Matlab Online') runs the full MATLAB Online suite, bot.test.suite('local') runs the suite on a local machine, etc.
What do you think? Or is this already in there somewhere that I've missed?
Best
Steve