
Join our team

If you are curious, dedicated, and ready to tackle challenges, you have come to the right place. We are always looking for new talent to strengthen our problem-solving capabilities. We are currently hiring for the roles of Data Analyst, Cloud Data Engineer, and Power BI Developer.

Reach out to us with your resume at careers@incredotech.com.


    Current Openings

    Power BI Developer

    • Support Business and Data Analysts in gathering and clarifying data and reporting requirements from business owners
    • Develop dashboards and reports to client requirements using Power BI, adhering to best practices
    • Connect Power BI to multiple data sources, both cloud and on-premises, with hands-on experience in disaggregating and aggregating data, transformation functions, subscriptions, Power BI Embedded, sharing and collaboration, data security, data alerts, and Cortana
    • Develop SQL and DAX queries and support ad hoc data requests; proficiency in Power Query
    • Deploy Power BI reports and dashboards
    • Identify and troubleshoot business-process, data-quality, and performance issues surfaced in reports, and communicate with stakeholders to resolve them
    • Map integrations for native as well as third-party providers
    • Develop high-performing, reliable, and scalable solutions
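To illustrate the kind of aggregation and disaggregation work mentioned above, here is a minimal Python sketch — purely illustrative, with hypothetical record and function names; it is not part of the role description:

```python
from collections import defaultdict

# Hypothetical sales records, as might be pulled from a connected
# source before loading into a Power BI dataset.
sales = [
    {"region": "West", "product": "A", "revenue": 100.0},
    {"region": "West", "product": "B", "revenue": 50.0},
    {"region": "East", "product": "A", "revenue": 75.0},
]

def aggregate_by_region(rows):
    """Aggregation: roll detail rows up to one total per region."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["revenue"]
    return dict(totals)

def disaggregate_evenly(total, parts):
    """Disaggregation: split a regional total evenly across parts."""
    share = total / len(parts)
    return {part: share for part in parts}

print(aggregate_by_region(sales))               # {'West': 150.0, 'East': 75.0}
print(disaggregate_evenly(150.0, ["A", "B"]))   # {'A': 75.0, 'B': 75.0}
```

In real Power BI work these operations would typically be expressed in Power Query or DAX rather than Python; the sketch only shows the shape of the logic.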


    Preferred Background

    • Minimum 5 years of total experience, including 2+ years of relevant experience
    • Experience in utilizing report writing best practices
    • Experience with manual testing to include User Acceptance Testing (UAT)
    • Knowledge of DAX and ability to use complex expressions to calculate, group, filter, parameterize, optimize and format custom dashboards/reports
    • Working knowledge of SQL and other data sources
    • Bachelor’s degree in Computer Science or Computer Engineering, or an equivalent degree, required
    Cloud Data Engineer

    • Design, develop, and maintain scalable data pipelines and ETL processes on the AWS cloud platform using services such as AWS Glue, AWS Lambda, and AWS EMR.
    • Implement data ingestion, transformation, and integration workflows to acquire, process, and store large volumes of structured and unstructured data from various sources.
    • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical specifications.
    • Optimize data storage and retrieval mechanisms for performance, reliability, and cost efficiency using AWS services like S3, Redshift, and DynamoDB.
    • Ensure data quality and integrity by implementing data validation, cleansing, and enrichment techniques.
    • Monitor and troubleshoot data pipelines, identify and resolve performance bottlenecks and data processing issues.
    • Implement security controls and best practices to protect data assets and ensure compliance with industry standards and regulations.
    • Perform data modeling, schema design, and data governance activities to support analytical and reporting needs.
    • Stay updated with the latest cloud technologies, industry trends, and best practices in data engineering and big data processing.
    • Collaborate with cross-functional teams to define and implement data engineering standards, processes, and tools.
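As a hedged sketch of the extract-transform-load flow with validation described above, the following self-contained Python example uses the standard library's csv and sqlite3 modules as stand-ins for S3/AWS Glue and a warehouse like Redshift; all names and data are hypothetical:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in production this might arrive in S3 and be
# processed by an AWS Glue job — here a CSV string and sqlite3 stand in.
raw = """order_id,amount
1,19.99
2,not_a_number
3,5.00
"""

def extract(text):
    """Extract: parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: validate and cleanse — drop rows whose amount does
    not parse as a number (a simple data-quality rule)."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would quarantine and log the row
    return clean

def load(rows, conn):
    """Load: write the cleansed rows into the warehouse table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
count, total = conn.execute(
    "SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(count, round(total, 2))  # 2 24.99
```

The invalid row is dropped during the transform step, so only two of the three input rows reach the warehouse table.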

    Preferred Background

    • Bachelor’s degree in Computer Science, Engineering, or a related field. Master’s degree is a plus.
    • 3 to 6 years of experience working as a Data Engineer, preferably with a focus on cloud-based data processing and analytics.
    • Strong hands-on experience with AWS cloud services such as AWS Glue, AWS Lambda, S3, Redshift, EMR, and DynamoDB.
    • Proficiency in programming languages like Python, Java, or Scala for data processing and ETL tasks.
    • Experience with big data technologies such as Apache Hadoop, Apache Spark, or Apache Kafka.
    • Solid understanding of data modeling concepts, SQL, and database systems.
    • Knowledge of data governance, data security, and privacy principles.
    • Familiarity with DevOps practices and tools for continuous integration and deployment.
    • Strong problem-solving and troubleshooting skills, with the ability to analyze complex data-related issues.
    • Excellent communication skills and ability to work effectively in a collaborative team environment.