
Reading and Writing Parquet file with nested datatype using Pyspark

$2-8 USD / hour

Completed
Posted almost 3 years ago


Please find the images attached.

Read the parquet file line by line and column by column. Each column value will be passed to another function that returns some value; that new value has to replace the string in the current column value, and the records (with the changed values) are written to a new parquet file. While writing, we have to make sure that the order of the records and the schema structure stay exactly the same (apart from the changed values).

For example: in the sample [login to view URL], we see [login to view URL] for all old names James, Michael, Robert, Washington...
For old_name --> James, create a function named transformer(); if we pass [login to view URL] ---> brown should be replaced with black.
For old_name --> Michael, if we pass [login to view URL] ---> null should be replaced with black.
The changes should appear in the new parquet file named [login to view URL], with the same schema structure, order of columns, and order of records.

Note: the sample data is only example input; the logic should be dynamic. The parquet file schema will not be the same all the time, so the code should read the parquet file schema dynamically and create the new parquet file with the changed data (xxx). The rows, schema, and columns should remain the same.

Code snippet for the sample data:

    dataDictionary = [
        ('James', {'hair': 'black', 'eye': 'brown'}, ("James", "", "Smith")),
        ('Michael', {'hair': 'brown', 'eye': None}, ("Michael", "Rose", "")),
        ('Robert', {'hair': 'red', 'eye': 'black'}, ("Robert", "", "Williams")),
        ('Washington', {'hair': 'grey', 'eye': 'grey'}, ("Maria", "Anne", "Jones"))
    ]

    schema = StructType([
        StructField('old_name', StringType(), True),
        StructField('properties', MapType(StringType(), StringType()), True),
        StructField('name', StructType([
            StructField('firstname', StringType(), True),
            StructField('middlename', StringType(), True),
            StructField('lastname', StringType(), True)
        ]))
    ])

The "Sample data" screenshot has the sample data; the "Sample schema" screenshot has the schema details.
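A minimal PySpark sketch of the kind of schema-driven rewrite being asked for (assuming Spark 3.1+ for transform_values; transformer() below is only a stand-in Column-level rule for the brown/null --> black example, and input.parquet / output.parquet are hypothetical paths). The idea is to walk the schema recursively, so string leaves inside structs and map values are transformed while the structure, column order, and nullability stay intact:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, MapType, StringType

    spark = SparkSession.builder.appName("rewrite-nested-parquet").getOrCreate()

    def transformer(c):
        # Placeholder for the real value-level function: here, 'brown' or null becomes 'black'.
        return F.when(c.isNull() | (c == "brown"), F.lit("black")).otherwise(c)

    def rebuild(col, dtype):
        # Recursively rebuild a column so the output keeps the exact same schema structure.
        if isinstance(dtype, StructType):
            return F.struct(*[rebuild(col.getField(f.name), f.dataType).alias(f.name)
                              for f in dtype.fields])
        if isinstance(dtype, MapType) and isinstance(dtype.valueType, StringType):
            # Transform only the map values; keys are left untouched.
            return F.transform_values(col, lambda k, v: transformer(v))
        if isinstance(dtype, StringType):
            return transformer(col)
        return col  # non-string leaves pass through unchanged

    df = spark.read.parquet("input.parquet")   # schema is discovered at read time
    out = df.select(*[rebuild(F.col(f.name), f.dataType).alias(f.name)
                      for f in df.schema.fields])
    out.write.mode("overwrite").parquet("output.parquet")

Because the select list is built from df.schema.fields, the column order follows the input file, and record order is preserved within each partition since no shuffle is introduced; a UDF could be swapped in for transformer() if the real replacement logic cannot be expressed with built-in functions.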
Project ID: 31183672

About the project

2 proposals
Remote project
Active 3 years ago

Awarded to:
Hello, when viewing your job details it really hooked me, because I have so much experience in this area. With solid experience in data analysis and Microsoft certifications in data management and analysis, SQL Server and business intelligence, Python programming, PySpark, Airflow, and AWS services, I could be valuable for your project. Let's have 10 minutes to discuss the details and get started right away. Best regards, Hosni Mrizek
$7 USD in 20 days
0.0 (0 reviews)
2 freelancers are bidding on average $8 USD/hour for this job
Hi, I am an experienced data engineer with a solid background in Spark. I have worked on many projects with Spark, Scala, Python, Cassandra, Snowflake, AWS,... Let's have a call for more details about the project. Regards
$8 USD in 25 days
5.0 (1 review)

About the client

Mountain House, United States
5.0
2
Payment method verified
Member since Feb 22, 2021

Client Verification
