Team Fast-tabulous was formed for Queensland AI's Fast.ai course project. Our focus is on tabular data, using the Homesite Quote Conversion competition data from Kaggle. We're a team of enthusiastic learners.
The app was deployed on an Amazon AWS spot instance. Final app source code is available at [https://github.com/timcu/fast-tabulous-app/blob/main/fast-tabulous-with-db.ipynb](https://github.com/timcu/fast-tabulous-app/blob/main/fast-tabulous-with-db.ipynb). The final app can be used at [https://tabulous.pythonator.com](https://tabulous.pythonator.com).
Jul 30, 2021
Here I use fastai with as many defaults as possible to make a "first-pass" submission to Kaggle for the Homesite competition.
Jul 29, 2021
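In outline, a mostly-default fastai tabular pipeline for this competition might look like the following sketch (file paths, column handling and the number of epochs are illustrative, not the post's exact code):

```python
import pandas as pd
from fastai.tabular.all import *

# Homesite training data; the target column is QuoteConversion_Flag
df = pd.read_csv('train.csv', low_memory=False)

# Let fastai decide which columns are continuous vs categorical
cont_names, cat_names = cont_cat_split(df, max_card=20, dep_var='QuoteConversion_Flag')

# Default preprocessing: categorify, fill missing values, normalise
dls = TabularDataLoaders.from_df(
    df, y_names='QuoteConversion_Flag', y_block=CategoryBlock(),
    cat_names=cat_names, cont_names=cont_names,
    procs=[Categorify, FillMissing, Normalize])

# Default tabular network, trained with one-cycle scheduling
learn = tabular_learner(dls, metrics=RocAucBinary())
learn.fit_one_cycle(3)
```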
Here I use fastai with some changes to the defaults to make another submission to Kaggle for the Homesite competition.
Jul 29, 2021
A basic comparison of predictions from an off-the-shelf sklearn classifier versus a regressor, using different criteria to split the decision trees. The data is taken from the Homesite Competition on Kaggle.
Jul 21, 2021
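A minimal sketch of that kind of comparison, using random forests and a quick numeric-columns-only shortcut rather than the notebook's actual preprocessing:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv('train.csv', low_memory=False)
y = df['QuoteConversion_Flag']
# Keep it simple for the sketch: numeric columns only, missing values as -1
X = df.select_dtypes('number').drop(columns=['QuoteConversion_Flag', 'QuoteNumber']).fillna(-1)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)

# Classifier: try two split criteria, score the predicted probabilities
for criterion in ('gini', 'entropy'):
    clf = RandomForestClassifier(n_estimators=100, criterion=criterion, n_jobs=-1, random_state=42)
    clf.fit(X_train, y_train)
    print(criterion, roc_auc_score(y_valid, clf.predict_proba(X_valid)[:, 1]))

# Regressor: treat the 0/1 flag as continuous; its predictions rank quotes directly
reg = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=42)
reg.fit(X_train, y_train)
print('regressor', roc_auc_score(y_valid, reg.predict(X_valid)))
```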
Once the TabularLearner (which contains a TabularModel) has been trained on a GPU, the GPU is no longer required, because predictions run on much smaller amounts of data. However, moving the TabularLearner to a CPU is not straightforward. This post shows you how.
Jul 19, 2021
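In outline, the general approach looks something like this (a sketch, not necessarily the exact steps in the post; the file name is illustrative):

```python
from fastai.tabular.all import load_learner

# Option 1: export on the GPU machine, then reload on a CPU-only machine
# learn.export('homesite_learner.pkl')   # run this where the model was trained
learn = load_learner('homesite_learner.pkl', cpu=True)

# Option 2: move an in-memory learner by hand
# learn.model.cpu()   # move the network weights off the GPU
# learn.dls.cpu()     # make the dataloaders produce CPU tensors
```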
This notebook loads a previously trained model and uses it to predict quote success rate, with user input to change fields. User input is captured with ipywidgets generated on the fly, allowing the most sensitive fields to be altered. Final app source code is available at [https://github.com/timcu/fast-tabulous-app/blob/main/fast-tabulous-with-db.ipynb](https://github.com/timcu/fast-tabulous-app/blob/main/fast-tabulous-with-db.ipynb). The final app can be used at [https://tabulous.pythonator.com](https://tabulous.pythonator.com).
Jul 13, 2021
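A minimal sketch of generating widgets on the fly for a handful of fields (the field names, ranges and values here are illustrative, not the ones the app actually surfaces):

```python
import ipywidgets as widgets
from IPython.display import display

# Illustrative set of "sensitive" fields and their current values
sensitive_fields = {'Field7': 23, 'SalesField5': 3, 'PersonalField9': 1}

def on_change(change):
    # The real app rebuilds the quote row with the new value and re-runs
    # learn.predict(row); here we just echo the change.
    print(f'{change.owner.description} -> {change.new}')

controls = []
for name, value in sensitive_fields.items():
    slider = widgets.IntSlider(value=value, min=0, max=50, description=name)
    slider.observe(on_change, names='value')
    controls.append(slider)

display(widgets.VBox(controls))
```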
This notebook loads a previously trained model and uses it to predict quote success rate. Any quote can then be chosen (from train or test) and a sensitivity analysis determines which fields can be changed to turn around quote success.
Jul 12, 2021
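A hedged sketch of the idea: take one quote row, perturb each candidate field in turn, and rank fields by how far the predicted conversion probability moves. Here `learn` is assumed to be a trained TabularLearner and the candidate values are illustrative, so this is not the notebook's exact code:

```python
def field_sensitivity(learn, row, candidate_values):
    """Return fields sorted by the largest gain in predicted conversion probability."""
    base = float(learn.predict(row)[2][1])      # probability of the positive class
    gains = {}
    for field, values in candidate_values.items():
        best = base
        for v in values:
            trial = row.copy()
            trial[field] = v
            best = max(best, float(learn.predict(trial)[2][1]))
        gains[field] = best - base
    return sorted(gains.items(), key=lambda kv: kv[1], reverse=True)

# Example: which of these illustrative fields could turn the quote around?
# field_sensitivity(learn, df.iloc[0], {'SalesField5': range(0, 6), 'Field7': range(10, 30)})
```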
We used the fastai tabular library and WalkWithFastai functions to build our deep learning model for the Homesite Quote Conversion competition on Kaggle. The techniques used in this notebook include Permutation Importance Analysis, Model Ensembling, Bayesian Optimisation for hyperparameter tuning, and Entity Embeddings.
Jul 8, 2021
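Of those techniques, the ensembling step is the easiest to sketch: average the predicted conversion probabilities from several independently trained learners. This assumes the learners share the same preprocessing and is not the notebook's exact code:

```python
import numpy as np

def ensemble_predict(learners, test_df):
    """Average predicted conversion probabilities across several trained fastai learners."""
    test_dl = learners[0].dls.test_dl(test_df)
    probs = [learn.get_preds(dl=test_dl)[0][:, 1].numpy() for learn in learners]
    return np.mean(probs, axis=0)
```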
This notebook uses the fastai library for EDA and Optuna for hyperparameter tuning on the Kaggle Homesite Quote Conversion dataset.
Jul 4, 2021
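A minimal sketch of what an Optuna search over a fastai tabular learner can look like; the tuned parameters and their ranges are illustrative, and `dls` is assumed to be an existing TabularDataLoaders:

```python
import optuna
from fastai.tabular.all import tabular_learner, RocAucBinary

def objective(trial):
    # Sample a small hyperparameter space: two layer sizes and a learning rate
    n1 = trial.suggest_int('layer1', 100, 500)
    n2 = trial.suggest_int('layer2', 50, 250)
    lr = trial.suggest_float('lr', 1e-4, 1e-1, log=True)
    learn = tabular_learner(dls, layers=[n1, n2], metrics=RocAucBinary())
    with learn.no_bar(), learn.no_logging():
        learn.fit_one_cycle(2, lr)
    return learn.validate()[1]          # validation AUC

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=20)
print(study.best_params)
```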
Here I borrow generously from Zach's notebook.
Jun 29, 2021
Here I improve on fastai's `cont_cat_split` function and add some changes to the defaults to make another submission to Kaggle for the Homesite competition.
Jun 27, 2021
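For reference, the stock fastai call being improved upon behaves like this (the improvement itself is described in the post; this sketch only shows the baseline behaviour, with an illustrative file path):

```python
import pandas as pd
from fastai.tabular.all import cont_cat_split

df = pd.read_csv('train.csv', low_memory=False)

# Stock behaviour: numeric columns with more than max_card distinct values
# become continuous; everything else is treated as categorical.
cont_names, cat_names = cont_cat_split(df, max_card=20, dep_var='QuoteConversion_Flag')
print(len(cont_names), 'continuous columns,', len(cat_names), 'categorical columns')
```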
This is a basic random forest model for the purposes of data exploration and establishing a baseline performance which can be referenced when more advanced models are used. At the end of this notebook, the most important columns are identified.
Jun 26, 2021
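A sketch of such a baseline, assuming a quick label-encoding of the raw columns rather than the notebook's actual preprocessing:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv('train.csv', low_memory=False)
y = df['QuoteConversion_Flag']
# Label-encode the object columns and fill missing values for the sketch
X = (df.drop(columns=['QuoteConversion_Flag', 'QuoteNumber', 'Original_Quote_Date'])
       .apply(lambda c: c.astype('category').cat.codes if c.dtype == 'object' else c)
       .fillna(-1))
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=100, min_samples_leaf=5, n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)
print('baseline AUC:', roc_auc_score(y_valid, rf.predict_proba(X_valid)[:, 1]))

# Rank columns by impurity-based importance to find the most important ones
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))
```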
This notebook applies four different Python EDA packages (Pandas Profiling, Sweetviz, D-Tale and AutoViz) to the Kaggle Homesite Quote Conversion dataset (~200k rows and 300 columns).
Jun 26, 2021
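Each of the four packages boils down to one or two calls; a rough sketch (running them on the full ~200k × 300 frame can be slow, hence the `minimal=True` option for the profiling report):

```python
import pandas as pd

df = pd.read_csv('train.csv', low_memory=False)

# Pandas Profiling: one HTML report of types, missing values and correlations
from pandas_profiling import ProfileReport
ProfileReport(df, title='Homesite profile', minimal=True).to_file('profile.html')

# Sweetviz: a comparable single-page HTML report
import sweetviz as sv
sv.analyze(df).show_html('sweetviz.html')

# D-Tale: an interactive grid served in the browser
import dtale
dtale.show(df)

# AutoViz: automatic plots of the most informative columns
from autoviz.AutoViz_Class import AutoViz_Class
AutoViz_Class().AutoViz(filename='', dfte=df, depVar='QuoteConversion_Flag')
```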
Synthetic data work by Jorge.
Jun 25, 2021