GitHub features | The getting started guide to big data: BigData-Notes


Published 2022-08-30 · Last updated 2025-02-24

This issue is about big data. If you want to learn big data, this one is for you!


Big data processing process


Learning framework

Log collection frameworks: Flume, Logstash, and Filebeat

Distributed file storage system: Hadoop HDFS

Database systems: MongoDB and HBase

Distributed computing framework:

  • Batch processing framework: Hadoop MapReduce
  • Stream processing framework: Storm
  • Hybrid processing framework: Spark, Flink

Query analysis frameworks: Hive, Spark SQL, Flink SQL, Pig, Phoenix

Cluster resource manager: Hadoop YARN

Distributed coordination service: ZooKeeper

Data migration tool: Sqoop

Task scheduling frameworks: Azkaban, Oozie

Cluster deployment and monitoring: Ambari, Cloudera Manager

Data collection

The first step in big data processing is data collection. Medium and large projects today usually adopt a microservice architecture with distributed deployment, so data must be collected from multiple servers, and collection should not interfere with normal business operations. This requirement gave rise to a variety of log collection tools, such as Flume, Logstash, and Filebeat, which can perform complex data collection and aggregation with only simple configuration.
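To make the idea concrete, here is a toy in-memory sketch of what such a pipeline does conceptually: per-server agents ship log lines to a central aggregator, which merges them into one time-ordered stream. The server names and data are invented for illustration; real tools like Flume or Filebeat handle this over the network with buffering and retries.

```python
def collect(agent_logs):
    """Merge per-server log lines into one aggregated, time-ordered stream."""
    aggregated = []
    for server, lines in agent_logs.items():
        for ts, line in lines:
            aggregated.append((ts, server, line))
    aggregated.sort(key=lambda rec: rec[0])  # order events by timestamp
    return aggregated

# Hypothetical logs from two web servers, as (timestamp, line) pairs
logs = {
    "web-1": [(3, "GET /cart"), (1, "GET /")],
    "web-2": [(2, "POST /login")],
}
stream = collect(logs)
# earliest event first, regardless of which server produced it
```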

Data storage

Once the data is collected, the next question is: how should it be stored? The most familiar options are traditional relational databases such as MySQL and Oracle, whose advantage is fast storage of structured data with support for random access. However, big data is usually semi-structured (such as log data) or even unstructured (such as video and audio). To store massive amounts of semi-structured and unstructured data, distributed file systems such as Hadoop HDFS, KFS, and GFS emerged. They can store structured, semi-structured, and unstructured data alike, and can be scaled out simply by adding machines.
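The core storage model behind HDFS can be sketched in a few lines: files are split into fixed-size blocks, and each block is replicated onto several data nodes. The block size, replication factor, placement policy, and node names below are simplified stand-ins (real HDFS defaults are 128 MB blocks and 3 replicas, with rack-aware placement).

```python
BLOCK_SIZE = 4    # toy value; real HDFS uses 128 MB blocks
REPLICATION = 2   # toy value; real HDFS defaults to 3 replicas

def store(data, nodes):
    """Split `data` into blocks and assign each block to REPLICATION nodes."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    placement = {}
    for idx, block in enumerate(blocks):
        # simple round-robin placement across data nodes
        replicas = [nodes[(idx + r) % len(nodes)] for r in range(REPLICATION)]
        placement[idx] = (block, replicas)
    return placement

layout = store(b"hello big data!", ["node-a", "node-b", "node-c"])
# each block now lives on two nodes; losing one node loses no data
```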

Distributed file systems solve mass data storage well, but a good storage system must consider both storage and access. For example, you may want random access to the data, which traditional relational databases are good at but distributed file systems are not. Is there a storage scheme that combines the advantages of a distributed file system with those of a relational database? HBase and MongoDB were created to meet exactly this need.
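What HBase adds on top of a file system can be illustrated with a minimal sorted key-value store: rows kept ordered by key give both random access by row key and efficient range scans. This is a hand-written sketch of the access pattern, not HBase's actual implementation (which layers regions, memstores, and HFiles on HDFS).

```python
import bisect

class SortedKVStore:
    """Toy store: rows sorted by key, supporting point gets and range scans."""

    def __init__(self):
        self._keys = []   # sorted row keys
        self._vals = {}   # key -> value

    def put(self, key, value):
        if key not in self._vals:
            bisect.insort(self._keys, key)  # keep keys sorted on insert
        self._vals[key] = value

    def get(self, key):
        # random access by row key, like an HBase Get
        return self._vals.get(key)

    def scan(self, start, stop):
        # range scan over [start, stop), like an HBase Scan
        lo = bisect.bisect_left(self._keys, start)
        hi = bisect.bisect_left(self._keys, stop)
        return [(k, self._vals[k]) for k in self._keys[lo:hi]]

store = SortedKVStore()
store.put("user#3", "carol")
store.put("user#1", "alice")
store.put("user#2", "bob")
```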

Data analysis

The most important part of big data processing is data analysis, which is usually divided into two types: batch processing and stream processing.

  • Batch processing: processing a massive, bounded, offline dataset accumulated over a period of time as a whole. Corresponding frameworks include Hadoop MapReduce, Spark, Flink, etc.
  • Stream processing: processing data in motion, that is, processing each record as it arrives. Corresponding frameworks include Storm, Spark Streaming, Flink Streaming, etc.
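The contrast between the two models can be sketched on a word count (purely illustrative code, not any framework's API): the batch version sees the whole dataset at once, while the stream version updates running state as each record arrives.

```python
from collections import Counter

def batch_count(records):
    """Batch: process the complete offline dataset in one pass."""
    return Counter(word for line in records for word in line.split())

def stream_count(record, state):
    """Stream: process one incoming record, updating running state."""
    for word in record.split():
        state[word] = state.get(word, 0) + 1
    return state

lines = ["big data", "big deal"]
batch = batch_count(lines)

state = {}
for line in lines:              # records arriving one at a time
    state = stream_count(line, state)
# on the same input, both approaches converge on the same counts
```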

Batch and stream processing each have their applicable scenarios. When timeliness is not critical or hardware resources are limited, batch processing works well; when results are time-sensitive and must be fresh, stream processing is the better fit. As server hardware gets cheaper and the demand for timeliness grows, stream processing is becoming more and more common, for example in stock price forecasting and e-commerce operations analytics.

The frameworks above require programming for data analysis. Does that mean you cannot analyze data if you are not a backend engineer? Of course not: big data is a complete ecosystem, and where there is a demand, there is a solution. To let people familiar with SQL analyze data, query analysis frameworks emerged; commonly used ones are Hive, Spark SQL, Flink SQL, Pig, and Phoenix. These frameworks allow flexible query analysis of data using standard SQL or SQL-like syntax. The SQL is parsed, optimized, and converted into job programs: Hive converts SQL into MapReduce jobs, Spark SQL converts SQL into a series of RDDs and transformations, and Phoenix converts SQL queries into one or more HBase scans.
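To see how a simple aggregation like `SELECT status, COUNT(*) FROM requests GROUP BY status` can be lowered to map, shuffle, and reduce stages, here is a hand-written sketch of the idea. It mirrors the shape of the plan an engine like Hive produces, not Hive's actual implementation.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(rows):
    # map: emit a (key, 1) pair per row, keyed by the GROUP BY column
    return [(row["status"], 1) for row in rows]

def shuffle(pairs):
    # shuffle: bring all pairs with the same key together
    return sorted(pairs, key=itemgetter(0))

def reduce_phase(pairs):
    # reduce: sum the counts for each key, implementing COUNT(*)
    return {key: sum(v for _, v in group)
            for key, group in groupby(pairs, key=itemgetter(0))}

rows = [{"status": 200}, {"status": 404}, {"status": 200}]
result = reduce_phase(shuffle(map_phase(rows)))
```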

Data application

After data analysis is complete, the next step is data application, which depends on your actual business needs. For example, you can visualize the data, or use it to improve a recommendation algorithm, which is now everywhere: short-video personalized recommendation, e-commerce product recommendation, news feed recommendation, and so on. You can also use the data to train machine learning models, but that belongs to other fields with their own frameworks and technology stacks, so I won't go into detail here.
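As one tiny example of such an application, here is a toy item-to-item recommender built from co-occurrence counts: items often viewed together with a given item are suggested first. The data and scoring here are invented for illustration and bear no relation to any production recommendation system.

```python
from collections import Counter

def recommend(histories, item, k=2):
    """Suggest the k items most often viewed together with `item`."""
    co = Counter()
    for viewed in histories:
        if item in viewed:
            # count every other item seen in the same session
            co.update(i for i in viewed if i != item)
    return [i for i, _ in co.most_common(k)]

# hypothetical per-user browsing sessions
histories = [
    {"phone", "case", "charger"},
    {"phone", "charger"},
    {"laptop", "mouse"},
]
picks = recommend(histories, "phone")
# "charger" co-occurs with "phone" twice, "case" once
```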


Picture reference:
https://www.edureka.co/blog/hadoop-ecosystem

GitHub repository: click to download

 



Source: Ictcoder, https://ictcoder.com/kyym/github-features-the-getting-started-guide-to-big-data-bigdata-notes.html


