We are building the world’s first universal risk rating for real estate (think of us as the Experian for the global real estate industry). We recently raised a £3.3 million seed round, at the height of the pandemic, from LocalGlobe and some of the largest property companies in the world, on the strength of our revenue growth and product/market fit. Our customers include Magic Circle law firms, property developers and insurance firms, as well as household names such as Marks & Spencer and Transport for London.
We’re now rapidly growing our engineering and data science team and want to hire ambitious people who aren’t afraid of hard problems and want to help us deliver on our mission. At Orbital Witness we want to help anyone involved in a property transaction properly understand what they are getting into from the outset, before incurring legal fees. To do this we’re building a “universal risk rating for real estate”, having validated the concept and received support from the industry. We’re seizing an opportunity that no one else is truly taking on and bringing property diligence into the 21st century.
To achieve this, we’re creating a brand-new team for this greenfield project. We already have an eight-person team building out our existing legal product, which has strong product/market fit; the new team will be separate from it (two teams with two different product strategies). So far the new team has a Tech Lead, a Product Manager, a Lead Data Scientist, a Data Scientist and a Backend Engineer, and we’re hiring three more people to fill the remaining roles: one Data Engineer and two Backend Engineers.
We currently have a small PostgreSQL-based data warehouse fed by Azure Data Factory pipelines running against our production databases. Your role as a Data Engineer would involve taking a fresh look at our data infrastructure, likely migrating towards Airflow as the central orchestrator for both ETL processing and machine learning training jobs. We’re also interested in using tools like Stitch or Fivetran to automate data ingestion from disparate sources. We’re looking for someone to take end-to-end ownership of these systems and build reliable pipelines that support business intelligence use cases as well as data science applications as we scale.
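To give a flavour of the orchestration work described above, here is a minimal, hypothetical sketch of an Airflow DAG wiring an extract step to a warehouse load. The DAG, task, and function names are illustrative assumptions, not a description of our actual pipelines.

```python
# Hypothetical sketch of a daily ETL DAG (Airflow 2.x style).
# All names here are illustrative, not the real pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_records(**context):
    # Placeholder: pull raw rows from a production source system.
    pass


def load_warehouse(**context):
    # Placeholder: upsert transformed rows into the PostgreSQL warehouse.
    pass


with DAG(
    dag_id="example_risk_etl",       # illustrative name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_records",
        python_callable=extract_records,
    )
    load = PythonOperator(
        task_id="load_warehouse",
        python_callable=load_warehouse,
    )

    # The warehouse load runs only after a successful extract.
    extract >> load
```

The `>>` operator is how Airflow expresses task dependencies; a real deployment would add retries, alerting, and sensors for upstream data availability.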
Our backend stack will likely involve Python, PostgreSQL, Redis, AWS services (such as SQS, Kinesis, DynamoDB), Terraform, Docker, and others. On the data science side we currently use PyTorch, Hugging Face implementations of BERT, flairNLP, scikit-learn, Azure OCR, and Flask. We follow a structured Kanban process with OKRs, weekly goals, epics/stories and minimal upfront estimation.
• Competitive salary, pension contributions, and equity options
• Flexible working environment
• 25 days paid holiday (plus bank holidays)
• Company laptop and personal development budget
• An inclusive community enjoying all-company offsites, lunches, and socials