
We tried to keep our testing workflow as flexible as possible. There are a couple of use cases we wanted to support:

- User is working with a pre-trained model that has already gone through extensive testing during training. In this case our test utilities are useful as e2e tests. Once you integrate the model into your handler, you can specify a set of test cases to be sure your API is going to behave as expected (like a unit test).

- User wants to train the model on our platform. They can add error metrics directly to their training script and prevent the model from being saved if any error metric exceeds a certain threshold. They can then additionally use the test.py script to run tests against the model + handler.
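A rough sketch of both flows in Python. The names here (`handler`, `TEST_CASES`, `check_metrics`) are hypothetical illustrations, not the platform's actual API, and the handler is a toy stand-in for a real model:

```python
# Hypothetical sketch -- all names are illustrative, not the platform's API.

def handler(payload):
    """Toy stand-in for a model handler: classifies text by length."""
    text = payload["text"]
    return {"label": "short" if len(text) < 10 else "long"}

# Flow 1: declarative test cases run against the handler, unit-test style.
TEST_CASES = [
    ({"text": "hi"}, {"label": "short"}),
    ({"text": "a much longer input string"}, {"label": "long"}),
]

def run_tests():
    for payload, expected in TEST_CASES:
        actual = handler(payload)
        assert actual == expected, f"{payload} -> {actual}, expected {expected}"
    return True

# Flow 2: training-side gate -- block the model save if any error
# metric exceeds its threshold.
def check_metrics(metrics, thresholds):
    for name, limit in thresholds.items():
        value = metrics.get(name, float("inf"))
        if value > limit:
            raise ValueError(f"metric {name}={value} exceeds threshold {limit}")
    return True  # caller proceeds to save the model only if this passes
```

The same idea generalizes: the e2e test cases exercise the full handler path, while the metric gate catches regressions before a bad model ever gets saved.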


