$35 USD / hour
Saharanpur, India
Local time: 3:21 PM
Joined Freelancer on November 4, 2012
3 Recommendations

Mohd T.

@tausy

4.9 (110 reviews)
6.4
96% Jobs Completed
83% On Budget
95% On Time
17% Repeat Hire Rate

ML, AI, Data Science, Python, Hadoop, Databases

- Data Scientist with over 7 years of industry experience and a strong understanding of Machine Learning, Data Analysis, Big Data/Hadoop, ETL, and Databases.
- Master's degree in Data Science from Trinity College Dublin and Bachelor's degree in Computer Science.
- Currently working as a data scientist at one of the world's largest banking and financial firms.
- Solid expertise in analyzing and maintaining large datasets.
- Skilled in Data Ingestion, Data Analysis, Data Migration, Data Consolidation, Data Processing, Data Visualization, and Data Mining.
- Over a 7-year career, worked primarily on Predictive Modeling, Machine Learning, and Hadoop to deliver predictive models in the Healthcare, Aviation, and Financial sectors.
- Extensive experience building machine learning applications with Python and its ML stack, including NumPy, Pandas, Scikit-Learn, and Matplotlib.
- Experience developing and implementing data analytics pipelines and ML systems on big data using PySpark.
- Worked extensively with Big Data and Hadoop stack tools including, but not limited to, Sqoop, Flume, Oozie, Hive, Impala, HDFS, and MapReduce.
- Years of project work with SQL, PL/SQL, ETL, Informatica, SSIS, and Informatica DIH.
- Proficient in the Java and Python programming languages; also work with the R statistical language.
- Current areas of interest: Data Science, Data Analytics, Machine Learning, Predictive Modeling, Knowledge Discovery from Databases (KDD), Data Mining, Web Mining, and Information Retrieval.


Portfolio Items

Classify Attacks and Normal Traffic Data Using PySpark

Designed a binary classifier to separate attack traffic from normal traffic. Both were derived from the raw network packets of the UNSW-NB15 dataset, created with the IXIA PerfectStorm tool in the Cyber Range Lab of the Australian Centre for Cyber Security (ACCS) to generate a hybrid of real modern normal activities and synthetic contemporary attack behaviours. The Tcpdump tool was used to capture 100 GB of raw traffic (Pcap files). The dataset contains nine attack types: Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. The Argus and Bro-IDS tools, together with twelve purpose-built algorithms, generate a total of 49 features plus the class label.
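The core step is a binary classifier over the 49 engineered features. The project used PySpark's ML pipeline; as a self-contained illustration of the same classification step, here is a sketch using scikit-learn instead, on invented stand-in data (the feature matrix and labels below are synthetic, not the real UNSW-NB15 features):

```python
# Sketch only: scikit-learn stands in for pyspark.ml, and the data
# is a synthetic stand-in for the 49 UNSW-NB15 features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 1000, 49                          # 49 features, as in UNSW-NB15
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
# Label 0 = normal traffic, 1 = attack (synthetic linear rule + noise).
y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.3f}")
```

In PySpark the shape is analogous: assemble the 49 columns with `VectorAssembler`, fit `pyspark.ml.classification.LogisticRegression`, and evaluate on a held-out split.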
Authorship Attribution Using Machine Learning

Authorship attribution attempts to identify the author of a text of unknown authorship from a list of known authors. This project examined how well various supervised machine learning methods predict the authors of unknown texts. We compared Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest on the authorship attribution task using 5-fold cross-validation, then chose the best algorithm and used it for prediction.

The study used a publicly available author-identification dataset from the UCI Machine Learning Repository:
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Of the four models, Linear SVC outperformed the other three, achieving a mean accuracy of 0.96 across the 5 folds. The model performed well on the unknown-author texts, predicting each author with 99% accuracy.
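The comparison step above can be sketched with scikit-learn. The tiny two-author corpus below is invented purely for illustration; the project used the UCI Victorian Era dataset:

```python
# Hedged sketch of the 5-fold model comparison; the corpus is a
# placeholder, not the UCI Victorian Era Authorship data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: 10 texts per author, with distinct vocabularies.
texts = (["the whale and the sea and the harpoon"] * 10
         + ["the ballroom and the gown and the carriage"] * 10)
authors = ["A"] * 10 + ["B"] * 10

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "LinearSVC": LinearSVC(),
    "MultinomialNB": MultinomialNB(),
    "RandomForest": RandomForestClassifier(n_estimators=50, random_state=0),
}

# Mean accuracy of each model over 5-fold cross-validation.
scores = {
    name: cross_val_score(make_pipeline(TfidfVectorizer(), model),
                          texts, authors, cv=5).mean()
    for name, model in models.items()
}
best = max(scores, key=scores.get)
for name, score in scores.items():
    print(f"{name}: mean accuracy {score:.2f}")
```

The best-scoring pipeline would then be refit on the full training set and applied to the unknown-author texts.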
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

A publicly available dataset from the UCI Machine Learning Repository is used to conduct the study of author identification.
dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Out of the four models, Linear SVC outperformed the other three models and achieved a mean accuracy of 0.96 over 5-folds. The model performed well on the unknown authored texts and predicted the authors with a 99% accuracy for each of the authors.
Authorship Attribution Using Machine Learning
Authorship attribution attempts to anticipate the author of a piece of text authored by an unknown author from a list of known authors. The performance of various supervised machine learning methods to predict the authors of unknown texts was examined in this project. We compared the performances of Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest algorithms on the authorship attribution task using 5-fold cross validation. After that, we chose the best algorithm and attempted to predict.

Authorship Attribution Using Machine Learning

Authorship attribution aims to identify the author of a piece of text, written by an unknown author, from a list of known candidates. This project examined the performance of several supervised machine learning methods on this task: we compared Logistic Regression, Linear Support Vector Machines, Naive Bayes, and Random Forest using 5-fold cross-validation, then selected the best model and used it to predict the authors of the unknown texts.

A publicly available dataset from the UCI Machine Learning Repository was used for the study.
Dataset: https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution

Of the four models, Linear SVC outperformed the other three, achieving a mean accuracy of 0.96 over the 5 folds. The model also performed well on the unknown-authored texts, predicting each author with 99% accuracy.
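The pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the tiny two-author corpus and the TF-IDF settings are placeholder assumptions standing in for the Victorian-era dataset.

```python
# Sketch of the authorship-attribution pipeline: TF-IDF features + Linear SVC,
# evaluated with 5-fold cross-validation. Toy corpus stands in for the dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder documents: two "authors" with distinct vocabularies.
docs = (["the carriage rolled along the misty moor at dusk"] * 5
        + ["profits and ledgers filled the merchant's busy day"] * 5)
labels = [0] * 5 + [1] * 5

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams + bigrams (assumed)
    LinearSVC(),
)

# 5-fold cross-validated accuracy, as reported in the project write-up.
scores = cross_val_score(pipeline, docs, labels, cv=5, scoring="accuracy")
print(scores.mean())
```

The same pattern generalises to more authors: `cross_val_score` stratifies the folds per class, so each author needs at least as many documents as there are folds.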
Stock Price Prediction Using News Sentiment

Stock markets are affected by many factors, causing uncertainty and high volatility. Time-series prediction is widely used in real-world applications such as weather forecasting and financial market prediction: it uses data observed over a period of time to predict the value at the next time step. This project captures market sentiment from financial news and uses it, together with time-series prediction, to forecast stock prices.

The objectives of this project can be summarised as follows:

1. Analyse the sentiment of financial news.
2. Use the generated sentiment, along with other important features, to predict the price of a stock.
3. Run Spark on Hadoop so that the proof of concept can be scaled up without difficulty.

The linear regression model performed well, predicting the stock prices of Apple Inc. with a Root Mean Square Error of 0.0088 on the training set and 0.2010 on the test set.
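The modelling step can be sketched like this. The data below is synthetic and the feature choice (previous price plus a daily sentiment score) is an assumption for illustration; the actual project ran this kind of pipeline on Spark over real news and market data.

```python
# Sketch of the price model: a linear regressor over the previous day's price
# plus a news-sentiment score. Synthetic data; the real project used Spark.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 300
sentiment = rng.uniform(-1, 1, n)        # assumed: daily sentiment in [-1, 1]
price = np.cumsum(0.5 * sentiment + rng.normal(0, 0.1, n)) + 100

# Features: previous day's price and sentiment; target: next day's price.
X = np.column_stack([price[:-1], sentiment[:-1]])
y = price[1:]
split = int(0.8 * len(y))               # chronological train/test split

model = LinearRegression().fit(X[:split], y[:split])
rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
print(f"test RMSE: {rmse:.4f}")
```

Note the chronological split: for time-series data the test period must come after the training period, otherwise the evaluation leaks future information.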
Four Shapes Classification Using Deep Learning

A deep CNN model was created to detect four shapes, i.e., star, circle, square, and triangle, from images. The model architecture was optimized to improve its accuracy.

Dataset: https://www.kaggle.com/datasets/smeschke/four-shapes

We trained a convolutional neural network (CNN) on the four-shapes dataset. The model uses convolutional layers, pooling layers, dropout layers, normalization layers, and dense layers with ReLU activations to classify the shapes.

After ten epochs of training, the network reached 100% training accuracy and 98.5% validation accuracy.
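The layer types named above can be illustrated in plain NumPy. This is a didactic sketch of one convolution, ReLU, and max-pooling step, not the project's trained model (whose exact architecture is not given):

```python
# Minimal NumPy illustration of the CNN building blocks named above:
# a 3x3 convolution, a ReLU activation, and 2x2 max pooling.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation) on one channel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool2x2(x):
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]      # trim odd edges
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))

image = np.zeros((6, 6))
image[1:5, 1:5] = 1.0                    # a filled "square" shape
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)  # vertical-edge filter
feature_map = max_pool2x2(relu(conv2d(image, edge_kernel)))
print(feature_map.shape)
```

A real CNN stacks many such filter banks and learns the kernel weights by backpropagation; the dropout and normalization layers mentioned above regularise that training.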
Animals Image Classification Using Deep/Transfer Learning

This project used deep neural networks to classify images of animals. Our approach uses transfer learning: pre-trained CNN architectures are further trained on animal images so that the model can predict the animal in a given image.

The VGG-19 architecture served as the base model and was fine-tuned on the animal images. We applied several data augmentation techniques (scaling, shearing, flipping, etc.) and trained the model for 200 epochs. It predicted the animals with over 91% accuracy on the training set, 78% on the validation set, and approximately 84% on the test set.
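The augmentation step mentioned above can be sketched with NumPy. These are simplified stand-ins for illustration only; a real pipeline would use a framework's image tools with proper interpolation and shearing.

```python
# Simplified NumPy stand-ins for two of the augmentations mentioned above
# (horizontal flip and zoom); illustrative only, not the project's pipeline.
import numpy as np

def horizontal_flip(image):
    return image[:, ::-1]

def scale_nearest(image, factor):
    """Nearest-neighbour zoom that keeps the original size (zoom augmentation)."""
    h, w = image.shape[:2]
    rows = (np.arange(h) / factor).astype(int) % h
    cols = (np.arange(w) / factor).astype(int) % w
    return image[np.ix_(rows, cols)]

image = np.arange(16.0).reshape(4, 4)   # placeholder for an animal photo
augmented = [horizontal_flip(image), scale_nearest(image, 2.0)]
print([a.shape for a in augmented])
```

Augmentation multiplies the effective training set, which matters here: fine-tuning a large base model like VGG-19 on a small dataset overfits quickly without it.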
Mask Detection/Real-time Human Counting with Deep Learning

The goal of this project is a system that detects whether or not someone is wearing a mask. CCTV cameras record images or real-time video footage, from which facial features are extracted and used to identify a mask on the face. The application detects face masks using a convolutional neural network, and also counts the number of people wearing a proper face covering versus those who are not.
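The counting step can be sketched as follows. The per-face labels here are hypothetical placeholders for the CNN classifier's output on one video frame:

```python
# Sketch of the counting step: given the CNN's per-face predictions for one
# frame (hypothetical labels here), tally masked vs. unmasked people.
from collections import Counter

def count_masks(predictions):
    """predictions: list of 'mask' / 'no_mask' labels, one per detected face."""
    tally = Counter(predictions)
    return tally["mask"], tally["no_mask"]

frame_predictions = ["mask", "no_mask", "mask", "mask"]  # hypothetical output
masked, unmasked = count_masks(frame_predictions)
print(masked, unmasked)  # prints: 3 1
```

In the real-time setting these per-frame tallies would be aggregated over a video stream, typically with face tracking to avoid double-counting the same person.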

Reviews

Showing 1 - 5 of 50+ reviews

5.0 · ₹16,000.00 INR · Python, Machine Learning (ML)
"He submitted the project within the provided time frame!"
Sri D. (@srideepthisd3), 2 months ago

5.0 · $50.00 USD · Python, NLP
"Great and fast work; communication with him is quick and convenient. I am happy to work with him."
N A. (@inourah14), 4 months ago

4.8 · ₹20,000.00 INR · Python, Data Processing, Excel, Microsoft Access, MySQL
"It was really nice to work with Mohd T. I would love to recommend this coder for all your relevant projects. Looking forward to working with him on my future projects."
Siraj M. (@sirajmultani), 8 months ago

5.0 · $120.00 USD · Java, Python, Machine Learning (ML), Big Data Sales, +1 more
"The work is impeccable. Delivered on time and complies with everything requested. One of the best freelancers on this site."
F. O. (@fortizclavijo), 9 months ago

5.0 · $350.00 SGD · Python, Software Architecture, Report Writing, Machine Learning (ML), Statistical Analysis
"Hired him to work on one of my projects. He was able to deliver the project proposal, project poster, artefact, and report ahead of time. Guided me all the way when setting up the environment and running the program. His friendly approach made it easier to deal with him."
Albin V. (@albinvarghese), 10 months ago

Experience

Data Scientist
Citibank Europe
Dec 2019 - Present
Working as a data scientist in the AI/ML team.

Hadoop/Machine Learning Developer
Opera Solutions
Sep 2017 - Present
Working on the Hadoop ecosystem in combination with Python and machine learning to deliver predictive models.

Hadoop Developer
Tata Global Delivery Center SA, Montevideo, Uruguay
Apr 2016 - Aug 2017 (1 year, 4 months)
Worked on the Hadoop ecosystem to deliver cutting-edge predictive models using Sqoop, Flume, Oozie, Hive, and MapReduce.

Education

MSc Data Science
Trinity College Dublin, Ireland, 2018 - 2019 (1 year)

Bachelor of Technology (Computer Engineering)
Jamia Millia Islamia, India, 2009 - 2013 (4 years)

Qualifications

Certificate in Healthcare
Tata Business Domain Academy, 2014

Oracle Database Certified SQL Expert
Oracle University, 2015
SQL proficiency certificate provided by Oracle

Oracle Database Certified PL/SQL Expert
Oracle University, 2015
PL/SQL proficiency certificate provided by Oracle

Verifications

Preferred Freelancer
Identity Verified
Payment Verified
Phone Verified
Email Verified
Facebook Connected

Certifications

Preferred Freelancer Program SLA 1: 92% · SQL 1: 90% · Java 1: 87% · SQL 2: 85% · Python 1: 80%

Top Skills

Python 76 · Java 59 · Big Data Sales 45 · Hadoop 41 · Machine Learning (ML) 17
