
Amazon Senior Specialist Solutions Architect, GenAI, Media & Entertainment in Irvine, California

Description

Are you a customer-obsessed builder with a passion for helping customers achieve their full potential? Do you have the business savvy, Generative AI background, and sales skills necessary to help position AWS as the cloud provider of choice for customers? Do you love building new strategic and data-driven businesses? Join the Worldwide Specialist Organization (WWSO) Generative AI team as a GTM Specialist Solutions Architect!

The Worldwide Specialist Organization (WWSO) works backwards from our customers' most complex and business-critical problems to build and execute go-to-market plans that turn AWS ideas into multi-billion-dollar businesses. WWSO teams include business development, specialist, and technical solutions architecture. As part of WWSO, you'll provide expertise across the entire life cycle of an AWS customer initiative, from developing ideas for new services to accelerating the adoption of established businesses. We pride ourselves on thinking big, delivering exceptional results for our customers, and working across AWS as #OneTeam.

In this role, you will help some of our largest Media, Entertainment, Games and Sports (MEGS) customers build, fine-tune, and deploy GenAI models and applications using AWS GenAI services such as Amazon Bedrock, Amazon SageMaker, and Amazon Q. You will also shape how we go to market (GTM) for GenAI in the MEGS industry and engage with AWS product owners to influence product direction. You will work directly with MEGS customers to understand their business problems, help them implement GenAI solutions, deliver briefing and deep-dive sessions, and guide them on GenAI adoption patterns and paths. You will work closely with other Solutions Architects from across AWS to enable large-scale MEGS use cases and drive the adoption of AWS GenAI services in the MEGS industry.

AWS is looking for a GenAI Senior Solutions Architect to serve as the Subject Matter Expert (SME) who helps MEGS customers design solutions that leverage our GenAI services for MEGS industry use cases. You will develop demos, white papers, blogs, reference implementations, and presentations that enable customers and partners to fully leverage AWS GenAI services for MEGS use cases. You will also create field enablement materials for the broader technical field population to help them understand how to integrate AWS GenAI solutions into MEGS customers' architectures.

Key job responsibilities

• Represent the voice of the customer; collaborate with field and central teams to bring customer feedback to product teams. Lead curation of custom feature and availability requests for unique customer use cases.

• Provide advanced technical knowledge to your aligned GTM teams to unblock our customers’ largest and most critical business challenges.

• Along with your extended team, own the technical bar for specialist technical artifacts and standards.

• Collaborate with your GTM colleagues to provide technical insights into GTM strategy and support field marketing to execute local technical events, campaigns, and customer engagements.

• Act as a thought leader sharing best practices through forums such as AWS blogs, whitepapers, reference architectures, and public-speaking events such as AWS Summit, AWS re:Invent, etc.

• Guide and support an AWS-internal community of technical subject matter experts aligned to your customers. Create field enablement materials for the broader SA population to help them understand how to integrate new AWS solutions into customer architectures.

We are open to hiring candidates to work out of one of the following locations:

Irvine, CA, USA | San Francisco, CA, USA | Santa Clara, CA, USA | Santa Monica, CA, USA | Seattle, WA, USA

Basic Qualifications

  • 10+ years of design, implementation, or consulting experience with distributed applications

  • 7+ years of experience managing technical, customer-facing resources

Preferred Qualifications

  • Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, engineering, or computer science

  • Experience optimizing ML workloads using model compression, distillation, pruning, sparsification, and quantization.

  • Experience with Transformer-based models and related optimization techniques such as FlashAttention, PagedAttention, speculative decoding, and hardware-informed efficient model architectures.

  • Experience with distributed training, inference optimization, and optimizing performance versus costs.

  • Experience with open-source frameworks for building LLM applications, such as LangChain and LlamaIndex.

  • Experience with designing, developing, and optimizing high-quality prompts and templates that guide the behavior and responses of LLMs.

  • Experience with design, deployment, and evaluation of LLM-powered agents and tools and orchestration approaches.

  • Customer-facing skills to represent AWS well within the customer's environment and drive discussions with senior personnel regarding trade-offs, best practices, and risk mitigation. You should be able to interact with Chief Data Science Officers as well as CxO-level business stakeholders within customer organizations.

  • Experience with AWS services such as Amazon SageMaker, Step Functions, OpenSearch, pgvector, S3, IAM, Cognito, EC2, Glue, and EMR.

  • Demonstrated ability to think strategically about business, product, and technical challenges in an enterprise environment.

  • Track record of thought leadership and innovation around Machine Learning.

  • Experience leading a cloud initiative as an AWS customer, or consulting with an AWS customer on their cloud transformation.

  • Experience with performance benchmarking and developing prescriptive guidance on optimally building, deploying and monitoring ML models on AWS, with a focus on driving actions at scale to provide low prices and increased selection for customers.

Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $122,900/year in our lowest geographic market up to $239,000/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.
