
Commit 84064ea (2 parents: aa47ebd + a7b49eb) — Merge pull request #37 from mmesiti/day8-questions: Add questions from day 8

1 file changed: content/questions/day8.md (+176 −0)
+++
template = "page-with-toc.html"
title = "Questions and notes from workshop day 8"
+++

## Icebreaker

- What programming language(s) are you normally working with (add an o for your answer):
  - Python: oooooooooooo
  - R: ooo
  - Julia: oo
  - C++: oooo
  - Fortran: ooo
  - Bash: oooo
  - CUDA: o
  - HIP:
  - (add yours here)...

- What does testing your code mean for you? How do you test?
  - Craft a dummy data set where I can easily follow what my code is doing to it and verify it is working well. +1
  - Test = automated test: manual testing is too expensive, so it gets done too rarely
  - small tests are nicer to work with than big ones +1
    - big ones tend to be more useful but a pain to work with
  - Run > fix error > run > fix error, etc. +1
  - Test examples and small features using CI capabilities on GitHub
  - Test suite with Python unittest, and a GitHub Action to run it
  - (add your comments here)...

## Automated testing

Material: https://coderefinery.github.io/testing/

## Motivations

Material: https://coderefinery.github.io/testing/motivation/

Your questions and comments here:

1. How do you (approximately) measure code coverage? If you have a function, and a single test invoking that function, do you count it as +1 function tested?
   - There are tools for that, which report "line" coverage and "function" coverage. Line coverage is the ratio between the lines that were executed during a run of the test suite and the total code lines of your program (excluding empty lines and comments). It's not a perfect metric, of course (and be aware of Goodhart's law: https://en.wikipedia.org/wiki/Goodhart%27s_law).
   - Function coverage is - I think - the ratio between the functions that were executed during a run of the test suite and the total functions in your code.

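To make the line-coverage ratio concrete, here is a tiny sketch with made-up numbers (the counts are purely illustrative; real tools such as coverage.py compute them for you):

```python
# Hypothetical counts, just to illustrate the ratio:
executed_lines = 180   # code lines hit while running the test suite
total_lines = 240      # all code lines, excluding blanks and comments

line_coverage = executed_lines / total_lines
print(f"line coverage: {line_coverage:.0%}")   # -> line coverage: 75%
```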
2. After running pytest, I also got two directories, one called `__pycache__` and another called `.pytest_cache`. Could you comment a bit on their use and purpose?
   - `__pycache__` is a directory containing files that Python (not pytest in particular) produces during execution and may reuse later when Python runs again, saving time (it can contain, e.g., bytecode-compiled Python code). This is the general idea of a "cache".
   - `.pytest_cache` is a cache directory specific to pytest.
   - Caches are not necessary; you can delete them and everything will work as expected, albeit a little slower, because the contents of the cache may need to be recreated.

## Testing locally

Material: https://coderefinery.github.io/testing/locally/

:::info
## Exercise - Until xx:54
Instructions: https://coderefinery.github.io/testing/locally/#exercise

Progress report... are you (mark with an 'o' or any letter):
- done: ooooo
- need more time:
- had major problems:

If you have problems and/or want to talk, you can join the Zoom help room (information on how to join it has been shared in earlier emails)
:::

3. In the exercise, when we change to "-", is there a way for pytest to check all the asserts, i.e. not stop at the first one whose condition fails?
   - You could put each assert in its own test function.

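One convenient way to get one independent test per assertion is pytest's `parametrize` mark, which generates a separate test for each input, so one failure does not stop the others. A minimal sketch (the `add` function and the values are made up for illustration):

```python
import pytest

def add(a, b):
    return a + b

# Each tuple becomes its own test case; pytest runs them independently,
# so a failing case does not prevent the others from running.
@pytest.mark.parametrize("a,b,expected", [
    (1, 2, 3),
    (2, 3, 5),
    (-1, 1, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```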
4. Can this kind of testing heavily influence the complexity of the code, e.g. when dealing with fancy ML/DL models? Is it demanding in terms of processing?
   - Not sure I understand the question.
   - Ideally tests are lightweight enough that you can work with them interactively, so you can quickly iterate: code -> (compile) -> test -> fix resulting problems, repeat. That's not always possible, however. For ML models, you rarely run a full training that can take hours or days as a test in an automated development pipeline. You do still evaluate your model, of course, but not in the context of automated software testing.

5. How does the testing work? Does it simply run all functions, even when they aren't called?
   - pytest - the framework we are using - inspects your files (even directories) and finds all the functions whose names start with `test_` or end with `_test`, and calls them for you. If you tell pytest to "test a directory", it will look for all files whose names start or end with "test". In this sense, we may say that "registering" a test only requires naming a function appropriately.
   - In other languages one has to explicitly "register" tests, e.g. using macros (see the Catch2/Google Test approaches), and then the testing framework does the work of calling all the registered tests.

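The naming convention above is all there is to it; a sketch of a file pytest would collect (file and function names are made up for illustration):

```python
# Contents of a hypothetical test_example.py: pytest collects this file
# because its name starts with "test", and runs every function in it
# whose name starts with "test_".

def multiply(a, b):
    return a * b

def test_multiply():          # discovered and called by pytest
    assert multiply(3, 4) == 12

def helper():                 # NOT collected: name lacks the test_ prefix
    return "not a test"
```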
6. How does it differ from "try"?
   - "try", for catching exceptions, is something you expect to use at run time, when your software is doing actual work and encounters an error that you want to manage (and that you expect to happen sometimes). Here, we are creating test cases - when we run them, we verify assumptions about how our software works.

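The contrast can be sketched side by side (the functions and the fallback choice here are hypothetical, just to illustrate the two roles):

```python
# Run-time error handling: "try" manages an expected failure while the
# program is doing real work.
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return float("inf")   # hypothetical fallback choice

# Test-time verification: a test states an assumption about behaviour
# and fails loudly if it no longer holds.
def test_safe_divide():
    assert safe_divide(8, 2) == 4.0
    assert safe_divide(1, 0) == float("inf")
```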
## Automated testing

Material: https://coderefinery.github.io/testing/continuous-integration/

Will be done as a walkthrough: https://coderefinery.github.io/testing/continuous-integration/#continuous-integration

Please post your questions and comments here:

7. Bonus question: why is the job on Johan's GitHub Actions failing?
8. Would it make sense to split the GitHub Actions job between the parts that succeeded and the parts that failed? They do different things, right?
   - It can, but since creating the coverage report requires running the tests, you would be running some things multiple times.

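For reference, a minimal GitHub Actions workflow that runs pytest on every push might look like the sketch below; the file name, Python version, and dependency setup are assumptions, not taken from the walkthrough:

```yaml
# Hypothetical .github/workflows/test.yml
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest
      - run: pytest
```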
:::info
## Break until xx:38
:::

## Test design

Material: https://coderefinery.github.io/testing/test-design/

:::info
## Exercise until xx:02
Have a look at the list of exercises in this episode:
https://coderefinery.github.io/testing/test-design/

Suggestion: start with the initial ones, which are simpler
(and fundamental).

Progress report... are you (mark with an 'o' or any letter):
- done:
- need more time:
- had major problems:
:::

Questions, continued:

9. The Design 3 solution throws PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\M\\AppData\\Local\\Temp\\tmpd6relgrg'.
   - Could it be that the file is open in an editor? Nope.
   - From the file path, I think it's a temporary file created in the test. It's really weird that the test does not have access to it. Is the same filename used in multiple tests, maybe?
   - I had the same issue; commenting out the os.remove line worked for me, but I still don't understand the error.
     - I confirm this.

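A likely cause, worth checking against the actual exercise code: on Windows, a file that is still open in the current process cannot be deleted, which raises exactly this PermissionError. A sketch of a pattern that avoids it, assuming the test writes and then removes a temporary file (the test body here is hypothetical):

```python
import os
import tempfile

def test_temporary_file_roundtrip():
    # Create the file with delete=False and close it before reading or
    # removing it: on Windows, deleting a file that is still open in the
    # same process raises PermissionError (WinError 32).
    handle = tempfile.NamedTemporaryFile(mode="w", delete=False)
    handle.write("1,2,3\n")
    handle.close()                      # release the handle first

    with open(handle.name) as f:        # safe to reopen now
        assert f.read() == "1,2,3\n"

    os.remove(handle.name)              # and safe to remove
    assert not os.path.exists(handle.name)
```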
10. How would I write a test to make sure the correct error is raised when an incorrect input is passed?
    - Depending on the language/testing framework, there are different tools to do that.
    - In Python you can do something like
      ```
      try:
          your_code()
          assert False, "Exception not raised"
      except ExceptionYoureExpecting:
          pass
      except Exception:
          assert False, "Wrong exception raised"
      ```
      But, for example, in pytest there's a context manager (`with ...`) called `pytest.raises()`, which does that job in a better way than reimplementing it yourself.
    - Other languages/testing frameworks have similar facilities that do this for you, even if you could write out the logic yourself.
    - Ok, many thanks! I had Python (pytest) in mind, and `raises()` seems to be what I was looking for.

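A minimal sketch of the `pytest.raises()` approach mentioned above; the function under test and its validation rule are made up for illustration:

```python
import pytest

def parse_age(text):
    # Hypothetical function under test: rejects negative ages.
    age = int(text)
    if age < 0:
        raise ValueError("age must be non-negative")
    return age

def test_parse_age_rejects_negative():
    # The test passes only if the expected exception is raised.
    with pytest.raises(ValueError):
        parse_age("-3")

def test_parse_age_reports_message():
    # Optionally check the error message too.
    with pytest.raises(ValueError, match="non-negative"):
        parse_age("-3")
```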
:::info
### Catch2 example, see https://coderefinery.github.io/testing/locally/
:::

## Feedback

:::info
We hope you got an impression of what automated testing can be useful for and how it can be implemented for different programming languages. The lesson materials will continue to be available for reference.

Join https://coderefinery.zulipchat.com/ to ask further questions and meet the instructors and the rest of the team.

### Outlook for the next lesson: Modular Code Development
It will tie the whole workshop together: we will revisit almost all of the topics of the full workshop and apply them.
- Great, looking forward to it.
:::

Today was (vote for all that apply):
- too fast:
- too slow:
- right speed: o
- too slow sometimes, too fast other times: o
- too advanced:
- too basic:
- right level: oo
- I will use what I learned today: oo
- I would recommend today to others: oo
- I would not recommend today to others:

One good thing about today:
- Great overview of different types of testing, and the hands-on practicals were very useful +1
- Presenters were good, yet Oscar could try to type a bit slower (to ease following along). :)

One thing to improve for next time:
- Interesting topic, but I would recommend devoting more time, at least another hour or two, to go slowly and finish everything that is prepared.

Any other feedback?
- Maybe stick to one programming language (e.g. Python), and devote an extra day to other languages.
- How about having a topic on debugging and best practices of debugging? I somehow miss that in the workshop.