Last Saturday, I participated in a datathon focused on wildfire prevention. The challenge was as follows:
The topic of this hackathon will be wildfire prevention. Wildfires in Los Angeles are becoming increasingly severe, posing significant risks to communities, infrastructure, and natural habitats. As climate change continues to exacerbate these disasters, the demand for creative and effective solutions has never been greater. That is the core idea behind this hackathon: to harness AI-driven innovation in the fight against wildfires.
Although I had never trained an AI model before, I was curious to see how I would perform against the competition. To prepare, I watched a single video on how to approach hackathons.
The event was organized by Epoch, a student team from Delft University of Technology. The datathon also served as a recruitment opportunity for new members in the upcoming year.
My Experience
During the hackathon, I spent most of my time on data cleaning, whereas my competitors seemed more focused on feature engineering. This was likely because they had more experience and had prepared for the competition, while I was still learning on the fly.
Preparation would have been immensely helpful. Setting up a reusable workflow for data cleaning, such as handling missing values (NaN) and verifying date formats, is common practice and could have saved me a lot of time.
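As a minimal sketch of what such a prepared cleaning step might look like in pandas (the column names `date` and `temp` are hypothetical, not from the actual competition dataset):

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """A small, reusable cleaning pass: parse dates, drop unparseable rows,
    and fill missing numeric values with the column median."""
    df = df.copy()
    # Parse dates up front; invalid entries become NaT instead of failing later.
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    # Drop rows whose date could not be parsed.
    df = df.dropna(subset=["date"])
    # Fill remaining missing numeric values with each column's median.
    num_cols = df.select_dtypes("number").columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())
    return df

# Tiny illustrative example (made-up data):
raw = pd.DataFrame({
    "date": ["2025-01-01", "not a date", "2025-01-03"],
    "temp": [30.0, 99.0, None],
})
cleaned = clean(raw)
```

Having a function like this ready before the clock starts is exactly the kind of preparation that would have freed up time for feature engineering.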
Resources
If you’re interested, you can download the notebook I worked on during the hackathon: Download the notebook
(Be warned: it’s not pretty and isn’t particularly effective at its task.)
The competition was hosted on Kaggle, and the dataset used for training can be found here.
Final Results
I finished in last place 😅, so there's still a long way to go! However, this experience has definitely sparked my curiosity about data science, and I might explore more Kaggle competitions to improve my skills.