Senior Python Developer
Job Description
**Job Title**: Senior Python Developer
**Reports to**: Lead Machine Learning Engineer

Intelligent Audit is a fast-growing freight audit & business analytics technology company helping our customers become smarter shippers: shipping to their customers faster, more cheaply, and with fewer delivery exceptions. We use big data to help our customers remove inefficiencies in their global transportation spend.

As a **Senior Python Developer**, you will build and maintain backend systems focused on automating internal processes and supporting the Data Science team. Your primary responsibility will be developing scalable web scraping solutions and data pipelines to extract, transform, and load data from multiple external sources.

**What You Will Do:**

*Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. The individual in this position will be expected, on a regular basis, to:*

* **Process Automation & Backend Development**
  + Design and develop automated systems for internal process optimization.
  + Build scalable backend services that integrate with existing infrastructure.
  + Ensure services are reliable, performant, and maintainable.
* **Web Scraping & Data Extraction**
  + Develop and maintain web scraping solutions for multiple external websites.
  + Handle dynamic content, rate limiting, authentication, and anti-scraping measures.
  + Implement robust error handling, logging, and retry mechanisms.
* **Data Manipulation & Processing**
  + Clean, transform, and process large volumes of structured and unstructured data.
  + Build and maintain data pipelines to deliver usable datasets to downstream systems.
* **Monitoring & Observability**
  + Implement monitoring for system performance, data quality, and operational metrics (e.g., Datadog).
  + Create dashboards and alerts to ensure system reliability and data integrity.
* **Data Science Team Support**
  + Provide backend infrastructure, data pipelines, and tooling for Data Science projects.
  + Collaborate with data scientists to deliver data and services that enable experimentation and model deployment.
* **Code Quality & Best Practices**
  + Apply test-driven development and maintain solid unit/integration test coverage.
  + Perform code reviews and help drive coding standards and best practices.
  + Follow best practices in Python packaging, dependency management (e.g., uv, conda), and documentation.
* **CI/CD & Infrastructure**
  + Design and maintain CI/CD pipelines (e.g., GitHub Actions) for automated testing and deployment.
  + Work with DevOps to support containerization and orchestration (Docker, Kubernetes).
* **Lifecycle Management & Continuous Improvement**
  + Own the software lifecycle from development through deployment and maintenance.
  + Identify opportunities to reduce technical debt and improve performance and reliability.

**What You Will Bring:**

* **Technical Expertise**
  + Advanced Python proficiency with experience in backend frameworks (e.g., FastAPI).
  + Strong understanding of HTTP, RESTful API design, and general web technologies.
  + Experience working in Linux-based environments.
* **Web Scraping & Automation**
  + Hands-on experience with web scraping tools and libraries (e.g., BeautifulSoup, Scrapy, Selenium, Playwright).
  + Ability to handle JavaScript-rendered content, cookies, sessions, and complex authentication flows.
* **Data Manipulation**
  + Strong experience with Python data libraries (e.g., pandas, polars, NumPy).
  + Strong SQL skills and familiarity with common data formats (JSON, XML, CSV, HTML).
* **Monitoring & Observability**
  + Experience with monitoring and observability tools (e.g., Datadog).
  + Ability to define useful metrics, logs, and dashboards, and configure alerting.
* **Database & Data Storage**
  + Experience with relational databases (e.g., PostgreSQL).
  + Familiarity with ETL/ELT concepts and data warehousing basics.
* **CI/CD & DevOps Collaboration**
  + Experience with automated build/test/deploy pipelines and containerized workloads.
  + Familiarity with Docker and Kubernetes.
* **Data Science Collaboration**
  + Understanding of data science workflows and model deployment needs.
  + Experience building tools and services that support ML and analytics use cases.

**Minimum Qualifications:**

* Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field, or equivalent practical experience.
* 5+ years of professional Python backend development experience.
* 3+ years of experience with web scraping and data extraction from multiple sources.
* Demonstrated experience building scalable backend systems and automated processes.
* Experience with monitoring and observability tools (Datadog or similar).
* Strong SQL skills and experience with relational databases.
* Proficiency with Linux, Git, and Bash command-line tools.

**Preferred Characteristics:**

* Experience with asynchronous and concurrent processing (e.g., asyncio, Celery).
* Prior exposure to logistics, freight, or supply chain domains.
* Experience with distributed systems and microservices architectures.
* Familiarity with data engineering tools and practices (e.g., dbt, Airflow, Prefect).
* Experience building APIs and services that integrate with machine learning models.
* Contributions to open-source projects related to web scraping, data engineering, or backend automation.
* Experience using LLM-powered development tools (e.g., Cursor, GitHub Copilot, ChatGPT) in daily workflows.
Job originally posted on: LinkedIn
💼 Find the best opportunities for developers at Job For Dev