A one-stop big data framework that makes Flink & Spark development easier


This issue recommends StreamX, a one-stop big data framework that makes Flink & Spark development easier.


StreamX is a one-stop big data framework that makes Apache Spark and Apache Flink development easier. It standardizes project configuration, encourages functional programming, defines best practices, and provides a series of out-of-the-box connectors, standardizing the entire process of configuration, development, testing, deployment, monitoring, and operations. Developers only need to focus on core business logic, which greatly reduces the learning curve and development barriers.

Main features

  • Provides a range of out-of-the-box connectors
  • Supports project compilation (Maven builds)
  • Supports startup in Application mode and YARN Per-Job mode
  • Quick daily operations (task start, stop, savepoint, and restore from savepoint; see the CLI sketch after this list)
  • Supports notebooks (online task development)
  • Supports task backup and rollback (configuration rollback)
  • Provides two sets of APIs: Scala and Java
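
For context, these daily operations correspond roughly to what you would otherwise run by hand against the Flink CLI. A sketch of the manual equivalents, where the job id, savepoint directory, and jar name are all placeholders:

flink savepoint $jobId hdfs:///savepoints                 # trigger a savepoint for a running job
flink stop --savepointPath hdfs:///savepoints $jobId      # stop the job, taking a savepoint first
flink run -s hdfs:///savepoints/savepoint-xxxx myjob.jar  # restore the job from a savepoint

StreamX wraps such steps behind one-click operations in its console.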

Function Architecture

StreamX consists of three parts: streamx-core, streamx-pump, and streamx-console.

(Figure: StreamX function architecture)

streamx-core

streamx-core is positioned as a development-time framework: it provides a series of out-of-the-box connectors, extends DataStream-related methods, and integrates the DataStream and Flink SQL APIs.

streamx-pump

streamx-pump is positioned as a data-extraction component, similar to FlinkX, built on the various connectors provided by streamx-core.

streamx-console

streamx-console is a comprehensive real-time data platform and low-code platform that effectively manages Flink tasks. It integrates project compilation, publishing, parameter configuration, startup, savepoints, flame graphs, Flink SQL, monitoring, and many other functions in one place, greatly simplifying the daily operation and maintenance of Flink tasks.

Installation and deployment

1 Environment preparation

streamx-console provides an out-of-the-box installation package, but there are some environment requirements before installation. The specific requirements are as follows:

(Figure: table of environment requirements)

At present, StreamX handles task release for Flink and supports both Flink on YARN and Flink on Kubernetes modes.

  • Hadoop

To use Flink on YARN, the target cluster must have Hadoop installed and the relevant environment variables configured. If your Hadoop environment is based on CDH, the environment variables can be configured as follows:

export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop # Hadoop installation directory
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HIVE_HOME=$HADOOP_HOME/../hive
export HBASE_HOME=$HADOOP_HOME/../hbase
export HADOOP_HDFS_HOME=$HADOOP_HOME/../hadoop-hdfs
export HADOOP_MAPRED_HOME=$HADOOP_HOME/../hadoop-mapreduce
export HADOOP_YARN_HOME=$HADOOP_HOME/../hadoop-yarn
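
A quick way to confirm the Hadoop client environment is visible from the shell (this check is a suggestion, assuming a running YARN cluster):

hadoop version
yarn node -list
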
  • Kubernetes

To use Flink on Kubernetes, you additionally need to deploy a Kubernetes cluster or use an existing one. Please refer to the Flink K8s integration documentation:
http://www.streamxhub.com/docs/flink-k8s/k8s-dev
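
Before pointing StreamX at Kubernetes, it is worth confirming the cluster is reachable from the deployment machine (assuming kubectl is installed and configured there):

kubectl cluster-info
kubectl get nodes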

2 Compile and install

You can either compile and install manually or download a pre-compiled installation package. The steps for manual compilation and installation are as follows:

git clone https://github.com/streamxhub/streamx.git
cd streamx
mvn clean install -DskipTests -Denv=prod
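
The build assumes Git, a JDK, and Maven are available on the PATH; a quick sanity check (the exact version requirements are the ones in the environment table above):

git --version
java -version
mvn -version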

If everything goes smoothly, you can see the compilation success message:

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Streamx 1.0.0:
[INFO]
[INFO] Streamx ............................................ SUCCESS [  1.882 s]
[INFO] Streamx : Common ................................... SUCCESS [ 15.700 s]
[INFO] Streamx : Flink Parent ............................. SUCCESS [  0.032 s]
[INFO] Streamx : Flink Common ............................. SUCCESS [  8.243 s]
[INFO] Streamx : Flink Core ............................... SUCCESS [ 17.332 s]
[INFO] Streamx : Flink Test ............................... SUCCESS [ 42.742 s]
[INFO] Streamx : Spark Parent ............................. SUCCESS [  0.018 s]
[INFO] Streamx : Spark Core ............................... SUCCESS [ 12.028 s]
[INFO] Streamx : Spark Test ............................... SUCCESS [  5.828 s]
[INFO] Streamx : Spark Cli ................................ SUCCESS [  0.016 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS

After the build completes, the final artifact is located at
streamx/streamx-console/streamx-console-service/target/streamx-console-service-1.0.0-bin.tar.gz
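
One way to unpack it in place (the destination directory is up to you):

tar -xzf streamx-console-service-1.0.0-bin.tar.gz

After unpacking, the installation directory looks like this: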

streamx-console-service-1.0.0
├── bin
│    ├── flame-graph
│    │    └── *.py                                              // flame graph scripts (internal use; users can ignore)
│    ├── startup.sh                                             // startup script
│    ├── setclasspath.sh                                        // Java environment variable script (internal use; users can ignore)
│    ├── shutdown.sh                                            // shutdown script
│    └── yaml.sh                                                // script for parsing YAML parameters (internal use; users can ignore)
├── conf
│    ├── application.yaml                                       // project configuration file (do not rename it)
│    ├── application-prod.yml                                   // configuration file to modify for deployment (do not rename it)
│    ├── flink-application.template                             // Flink configuration template (internal use; users can ignore)
│    ├── logback-spring.xml                                     // logback configuration
│    └── ...
├── lib
│    └── *.jar                                                  // the project's jar files
├── plugins
│    ├── streamx-jvm-profiler-1.0.0.jar                         // jvm-profiler, for the flame graph feature (internal use; users can ignore)
│    └── streamx-flink-sqlclient-1.0.0.jar                      // Flink SQL submission (internal use; users can ignore)
├── logs                                                        // program log directory
└── temp                                                        // temporary path used internally; do not delete

3 Modify the database configuration

With installation and unpacking complete, the next step is to prepare the database:

  • Create a new database named streamx in a MySQL instance that the deployment machine can connect to (a sample command follows the configuration below)
    Edit conf/application-prod.yml, find the datasource section, locate the MySQL configuration, and change it to the corresponding connection information, as follows:
datasource:
    dynamic:
      # Whether to enable SQL log output; it is recommended to disable it in production as it may degrade performance
      p6spy: false
      hikari:
        connection-timeout: 30000
        max-lifetime: 1800000
        max-pool-size: 15
        min-idle: 5
        connection-test-query: select 1
        pool-name: HikariCP-DS-POOL
      # Configure default data source
      primary: primary
      datasource:
        # Data Source-1, named Primary
        primary:
          username: $user
          password: $password
          driver-class-name: com.mysql.cj.jdbc.Driver
          url: jdbc:mysql://$host:$port/streamx?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2B8
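
The streamx database itself must exist before the first startup. One way to create it from the deployment machine, assuming a MySQL client is installed there (the utf8mb4 character set is an assumption, not from the original guide):

mysql -h $host -P $port -u $user -p -e "CREATE DATABASE IF NOT EXISTS streamx DEFAULT CHARACTER SET utf8mb4;"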

Tip: during installation there is no need to manually initialize any data. Simply set the database connection information, and table creation and data initialization will be completed automatically.

4 Start the project

Enter the bin directory and execute startup.sh to start the project. The default port is 10000:

cd streamx-console-service-1.0.0/bin
bash startup.sh

The relevant logs are written to streamx-console-service-1.0.0/logs/streamx.out.
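
To follow startup progress, or to confirm that the console is listening once it is up (the curl check is merely a convenience):

tail -f streamx-console-service-1.0.0/logs/streamx.out
curl -I http://localhost:10000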

Open a browser and go to http://$host:10000 to log in. The login page is shown below (default credentials: admin / streamx):

(Figure: StreamX Console login page)

5 Modify the system configuration

After entering the system, the first thing to do is modify the system configuration, under the menu StreamX → Setting. The settings page looks as follows:

(Figure: StreamX → Setting page)

The main configuration items are divided into the following categories:

Flink Home

This configures the global Flink Home. It is the only place where the system specifies the Flink environment, and it applies to all jobs.

Maven Home

Specifies the Maven Home. This is not supported yet and is planned for the next version.

StreamX Env

StreamX Webapp address: the web URL at which the StreamX Console is reachable. It is used mainly by the flame graph feature: running tasks send HTTP requests to the URL exposed here so that profiling data can be collected and displayed.

StreamX Console Workspace: the system's working directory, used to store project source code, compiled artifacts, and so on.

Email

The Alert Email settings configure the sender's email account used for alert notifications. Refer to your email provider's documentation for the specific values.

System screenshots

(Figures: StreamX Console screenshots)

You can explore the remaining features on your own.
