Spark and Big Data Application

Digital data is growing rapidly in volume thanks to the innovations taking place in the digital world. The quantities of data are huge and come from many sources, including smartphones and social networks, which traditional relational databases and analytical techniques are incapable of storing or processing. It is from this characteristic that such data gets the name Big Data. Traditional machine learning cannot handle the volume and velocity of the data, so the right system and programming model are required to complete the analysis. Big data comes in different forms, and its distribution and analysis are possible through data frameworks like Hadoop and Spark. Big data analytics can also benefit from different types of Artificial Intelligence (AI), such as Machine Learning. The use of big data tools and AI techniques has changed the face of data analysis. This paper examines a big data computing model based on Spark and the applications of big data. Additionally, it highlights big data platforms.

Big Data Platforms

Big data platforms fall into several categories, including batch processing, real-time (stream) processing and interactive analytics. Batch processing involves extensive computations that also require considerable time for data processing. The most common batch-processing platform is Apache Hadoop. The platform is scalable, cost-effective, flexible and tolerant of the faults involved in processing big data. Hadoop comprises components such as the Hadoop Distributed File System (HDFS), Yet Another Resource Negotiator (YARN) and MapReduce (Rahmani et al., 2021). Together, these components operate across the big data value chain, from aggregating and storing to processing and managing big data. Among stream-processing platforms, Apache Spark and Apache Storm are the most popular; applications of stream processing play a big part in weather and transport systems. In interactive analytics, datasets can be accessed remotely and used for various operations, and users can also access the system directly and interact with the data. An example of an interactive analytics platform is Apache Drill.

Apache Spark Big Data Computing

Apache Spark is a framework whose origin traces back to the University of California, Berkeley, in 2009, and it has since become a powerful tool for large-scale processing, unlike earlier tools. Spark Core is the basis of the features found in Spark. Spark has three application programming interfaces (APIs), namely Java, Python and Scala, and its core API is where the resilient distributed dataset (RDD), the central abstraction of its original programming model, is defined. Apache Spark supports building and running fast, sophisticated high-level applications on Hadoop. The Spark engine supports tools for SQL queries, streaming data applications, and complex analytics such as machine learning and graph algorithms. Several big data ecosystem components are based on the Spark platform, including the Spark runtime, Spark SQL, GraphX, Spark Streaming and the MLlib algorithm library (Chen, 2021).
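
To make the core abstraction concrete, here is a minimal sketch of the RDD API as it might be typed into spark-shell, where the SparkContext sc is predefined; the data and computation are purely illustrative.

    // An RDD is an immutable, partitioned collection spread across the cluster.
    val numbers = sc.parallelize(1 to 1000, numSlices = 4)

    // Transformations such as `map` are lazy; `reduce` is an action that
    // triggers the actual distributed computation.
    val sumOfSquares = numbers.map(n => n.toLong * n).reduce(_ + _)
    println(s"sum of squares = $sumOfSquares")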

Analyzing the Spark runtime requires examining its functionality, for instance task scheduling and memory management. For data transmission inside Spark using the RDD structure, the idea is to follow the core logical data flow of Spark. The first step involves dividing the data into many subsets (partitions) and transmitting those subsets to nodes in the cluster for processing. The second step provides result protection after the calculation: because the protection makes it possible to reproduce the same file content as the calculation results, backing up the file content also becomes easier. The final step happens only when calculation errors occur in any of the subsets, in which case the affected subset is recomputed to do away with the errors.
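
These three steps correspond to partitioning, lineage tracking and recomputation in the RDD API. The sketch below, reusing sc from the earlier spark-shell example, shows how an RDD's partition count and recorded lineage can be inspected; the HDFS path is hypothetical.

    // Hypothetical input path.
    val raw = sc.textFile("hdfs:///data/events.log", minPartitions = 8)
    val errors = raw.filter(_.contains("ERROR")).map(_.split("\t")(0))

    // Each RDD remembers the steps that derived it (its lineage); when a
    // partition is lost, Spark replays only these steps for that partition.
    println(errors.toDebugString)
    println(s"partitions = ${errors.getNumPartitions}")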

Spark SQL was developed out of Shark. Shark employed the concepts of HQL parsing, logical execution plan translation and execution plan optimization from Hive to achieve Hive compatibility; one could argue that the Spark job merely replaced the physical execution plan. It also depended on the Hive Metastore and Hive SerDes. For backward compatibility, Spark SQL reuses only the HQL parser, the Hive Metastore and the Hive server. Stated differently, Spark SQL takes over once HQL is parsed into an abstract syntax tree (AST). The Catalyst optimizer is in charge of creating and optimizing the execution plan, and it is considerably easier to build in Scala thanks to functional language capabilities such as pattern matching.
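
The hand-off from the parsed query to Catalyst can be observed by asking Spark SQL to print its plans. A minimal spark-shell sketch, with an illustrative in-memory table (spark, the SparkSession, is predefined in the shell):

    import spark.implicits._

    val visits = Seq(("alice", 3), ("bob", 5), ("alice", 2)).toDF("user", "pages")
    visits.createOrReplaceTempView("visits")

    // The SQL text is parsed into an AST; Catalyst then builds and optimizes
    // the logical plan and selects a physical plan before execution.
    val perUser = spark.sql("SELECT user, SUM(pages) AS total FROM visits GROUP BY user")
    perUser.explain(true) // prints parsed, analyzed, optimized and physical plans
    perUser.show()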

Among the many benefits of Spark SQL, integration is the most noteworthy: Spark SQL seamlessly combines Spark programs with SQL queries. Users may query structured data in a Spark program using the conventional DataFrame APIs or SQL, and the familiar APIs work with the data from R, Python, Scala and Java. This benefit can increase the effectiveness of research. The second benefit is Hive integration: running SQL or HiveQL queries on pre-existing warehouses. Unified data access offers a third benefit, since every data source is connected to in an identical manner; DataFrames and SQL provide access to a range of data sources such as Hive, Avro, Parquet, ORC, JSON and JDBC, and even combining data from different sources is possible. The final advantage is standard connectivity: server mode supports industry-standard JDBC and ODBC connections for business intelligence products.
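
The unified-access point can be sketched briefly: the same DataFrame reader serves JSON, Parquet and JDBC sources, and the results can be joined directly. All paths, connection details and column names below are hypothetical.

    val users  = spark.read.json("hdfs:///warehouse/users.json")
    val orders = spark.read.parquet("hdfs:///warehouse/orders.parquet")
    val stock  = spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/shop")
      .option("dbtable", "stock")
      .option("user", "reader")
      .option("password", "secret")
      .load()

    // Sources in different formats can be joined with the same API.
    val combined = users.join(orders, "user_id").join(stock, "item_id")
    combined.show(5)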

When the calculation algorithms in the MLlib algorithm library are analyzed, they show a high level of complexity. In traditional disk-based models, iterative computations demand a lot of CPU power because all intermediate results must be stored on disk while waiting for the next task to begin.

On the Spark platform, a portion of the work can be completed entirely in memory: to make iterative calculation more efficient, the iterative portion of the calculation task is moved straight into memory, and Spark can still fall back to disk and network operation under specific conditions. In short, Spark offers impressive benefits for iterative computing and can serve as a platform for distributed machine learning. A collaborative filtering algorithm is used to infer user preferences before recommendations are shared with the users.
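
The in-memory benefit for iteration can be illustrated with a caching sketch: the dataset that every pass reuses is pinned in memory once, so the gradient-descent loop below avoids re-reading from disk on each iteration. The input path, data format and step size are hypothetical.

    // Hypothetical input: one "x y" pair per line.
    val points = sc.textFile("hdfs:///data/points.txt")
      .map { line => val a = line.split(" "); (a(0).toDouble, a(1).toDouble) }
      .cache() // kept in memory, so each iteration skips the disk read

    // Gradient descent for a one-parameter least-squares fit y ~ w * x.
    var w = 0.0
    for (_ <- 1 to 20) {
      val grad = points.map { case (x, y) => (w * x - y) * x }.mean()
      w -= 0.1 * grad
    }
    println(s"fitted w = $w")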

The algorithm is applied in the following steps. System filtering comes first: after choosing the people who share similar interests, objects are chosen and categorized based on their preferences, creating a new set or sequence. Users in this process can be thought of as neighbors, and it is best to organize and make use of related users in order to determine the best execution strategy. Second comes cooperative filtering: the integration of user preferences is the essential component of the final suggestion, completed through the phases of gathering user preferences, analyzing similarity, and recommending based on the computation results. To gather behavioral data accurately, the first consideration is choosing a user system, then organizing it based on user behavior, and subsequently processing the data before issuing recommendations on what users may like given their preferences.
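
In Spark, these steps are packaged in MLlib's alternating least squares (ALS) implementation of collaborative filtering. A toy sketch follows; the ratings, column names and parameter values are illustrative only.

    import org.apache.spark.ml.recommendation.ALS
    import spark.implicits._

    // Toy user-item ratings; a real system would load these from storage.
    val ratings = Seq(
      (1, 10, 4.0f), (1, 11, 1.0f),
      (2, 10, 5.0f), (2, 12, 2.0f),
      (3, 11, 4.0f), (3, 12, 5.0f)
    ).toDF("userId", "itemId", "rating")

    val als = new ALS()
      .setUserCol("userId").setItemCol("itemId").setRatingCol("rating")
      .setRank(4).setMaxIter(10).setRegParam(0.1)

    val model = als.fit(ratings)
    // Top 3 recommendations per user, based on the learned preferences.
    model.recommendForAllUsers(3).show(false)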

The Spark Streaming system is responsible for extending Spark to streaming data: it divides the incoming stream according to a time interval and eventually forms RDDs. When processing streaming data with small intervals, there will likely be a processing delay, which makes the system near-real-time rather than strictly real-time. The advantages of Spark Streaming include its response to faults, as it has strong fault tolerance that makes it capable of handling errors and recovering data. Another benefit is its ability to connect to the other relevant Spark modules. An added advantage, apart from completing the flow of data, is its ability to deal with complexity.
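
A minimal Spark Streaming sketch makes the time-based division concrete: each batch interval of input becomes one RDD in the stream. The socket source and checkpoint path are hypothetical stand-ins.

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // A 5-second batch interval: every 5 seconds of input becomes one RDD.
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint("hdfs:///checkpoints/app") // supports recovery after failures

    val lines = ssc.socketTextStream("localhost", 9999)
    val errorCounts = lines.filter(_.contains("ERROR")).count()
    errorCounts.print() // per-batch counts, emitted every interval

    ssc.start()
    ssc.awaitTermination()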

Another key component of Spark is GraphX, whose foundation is large-scale graph computing; with the aid of GraphX's properties, Spark can process large graph datasets. Analysis of GraphX shows that it provides rich graph operators, such as the core and optimization operators. GraphX is also capable of meeting the graph-operation needs of several distributed clusters and has sufficient API interfaces. According to Chen (2021), GraphX plays a critical role in Spark as it improves its data absorption and scale.
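
As a small illustration of those operators, the sketch below builds a tiny property graph and runs PageRank, one of GraphX's built-in algorithms; the vertices and edges are invented for the example.

    import org.apache.spark.graphx.{Edge, Graph}

    // A tiny property graph: vertices carry names, edges carry a relation label.
    val vertices = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
    val edges = sc.parallelize(Seq(
      Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows"), Edge(3L, 1L, "follows")
    ))
    val graph = Graph(vertices, edges)

    // PageRank is one of the rich graph operators GraphX provides.
    val ranks = graph.pageRank(tol = 0.001).vertices
    ranks.join(vertices).collect().foreach { case (_, (rank, name)) =>
      println(f"$name%-6s $rank%.3f")
    }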

Applications of Big Data

Medical Application

Healthcare facilities and medical organizations use big data to anticipate potential health risks and prevent them. The healthcare system derives large benefits from large-scale data analysis. For example, analytics has made it possible to diagnose diseases in their early stages, as in cases of breast cancer. It has also supported the processing of medical images and medical signals to provide high-quality diagnosis and monitoring of patient symptoms. Additionally, tracking chronic diseases such as diabetes has become easier. Other uses are preventing the incidence of contagious diseases, education through social networks, genetic data analysis and personalized medicine. Data types include biomedical data, web data, data from electronic health records (EHRs) and hospital information systems (HISs), and omics data, which covers genomics, transcriptomics, proteomics and pharmacogenomics. Rich data, such as demographics, test results, diagnoses and personal information, are included in the EHR and HIS data (Ismail Ebada et al., 2022). Consequently, the significance of health data analysis has come into focus, giving scholars the knowledge to control, manage and process big data courtesy of tools like Apache Spark.

Apache Spark's in-memory computation enables rapid processing of large healthcare datasets, both unstructured and structured; it can analyze data approximately 100 times faster than typical MapReduce approaches. Spark's lambda architecture enables both real-time and batch processing. The tool analyzes streaming healthcare data and includes a library for handling Twitter data streams. Many messages relate to health, and communication is possible through social networks; Twitter, for instance, provides real-time public health statistics that are inexpensive to obtain, since real-time disease surveillance depends entirely on mining information through Twitter API methods (Nair et al., 2018). Spark's machine learning package, MLlib, can help create decision trees for illness prediction. Spark analyzes large amounts of data as it streams, and it outperforms Hadoop in streaming due to its memory-based processing: data and intermediate outcomes are kept in memory, which avoids the input-output delays of data moving back and forth on the hard disk.
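
As a sketch of the decision-tree idea, the snippet below trains MLlib's DecisionTreeClassifier on an illustrative table of patient measurements; the file path, column names and 0/1 "diabetic" label are hypothetical and not taken from the cited studies.

    import org.apache.spark.ml.classification.DecisionTreeClassifier
    import org.apache.spark.ml.feature.VectorAssembler

    // Hypothetical patient records with numeric columns and a 0/1 label.
    val records = spark.read
      .option("header", "true").option("inferSchema", "true")
      .csv("hdfs:///health/records.csv")

    // Assemble the numeric columns into the feature vector MLlib expects.
    val assembled = new VectorAssembler()
      .setInputCols(Array("glucose", "bmi", "age"))
      .setOutputCol("features")
      .transform(records)

    val Array(train, test) = assembled.randomSplit(Array(0.8, 0.2), seed = 42)
    val model = new DecisionTreeClassifier()
      .setLabelCol("diabetic").setFeaturesCol("features")
      .fit(train)

    model.transform(test).select("diabetic", "prediction").show(10)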

Spark uses resilient distributed datasets (RDDs), which are immutable object collections stored in memory and partitioned among nodes for parallel calculations. At heart a batch-processing engine, Spark handles streaming data from several sources, including Twitter, by dividing the incoming streams into small batches. Discretized streams (DStreams) are a series of RDDs that represent a continuous data stream in Spark Streaming, and the Spark engine computes the underlying RDD transformations that result from operations on DStreams. Popular learning methods found in MLlib that aid in solving machine learning problems include collaborative filtering, clustering and classification.
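
To show how DStream operations translate into transformations on the underlying batch RDDs, the sketch below counts health-related keywords over a sliding window; a socket source stands in for a Twitter feed, and the keywords and durations are illustrative.

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(2)) // 2-second micro-batches
    val tweets = ssc.socketTextStream("localhost", 9999) // stand-in for a tweet feed

    // Each operation below becomes a transformation on the per-batch RDDs:
    // a 60-second window of keyword counts, recomputed every 10 seconds.
    val mentions = tweets
      .flatMap(_.toLowerCase.split("\\s+"))
      .filter(w => w == "flu" || w == "fever")
      .map(w => (w, 1))
      .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(60), Seconds(10))

    mentions.print()
    ssc.start()
    ssc.awaitTermination()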

Natural Disaster Prediction and Management

The use of big data is a crucial part of responding to natural disasters. Among the applications of these tools are preventing disasters, providing relief in times of disaster and managing disasters in emergencies. In the era of the Internet of Things (IoT), the complexity of natural disaster management has been reduced. Big data models based on Spark and AI have enabled a new way of thinking, a technical system and innovative capabilities for dealing with complex disaster information, resulting in an effective approach to comprehensive disaster prevention and reduction. Big data has strategic significance, and together with its core values it is mainly composed of three concepts. The first is the level of strategic thinking, which covers the point where the disaster problem is beyond the ability of contemporary scientific understanding; such problems are high-frequency, high-intensity disasters that go beyond those the world defines as basic disasters.

Additionally, comprehensive disaster reduction goes beyond national strategy. The second concept is the level of innovation and development in information science and technology, which defines the situation where disaster big data poses challenges to traditional information science and technology; at this point the formation of new technology systems and theories accelerates, with complex information serving as the core. The third concept defines the level of social innovation and development: here, disaster big data is the core capability for ensuring that the world is connected and smartly secure and can reduce disasters using these smart strategies.

Disaster cloud computing is formed from the ability to efficiently monitor disaster information, standardize data, predict abnormalities and provide early warning of signs of disaster, along with existing methods of assessing the risks of all kinds of disasters (Chen, 2021). Since it deals with early warning, analyzes how to manage emergencies, provides emergency information services and runs the calculations of other models, it is capable of building a large distributed environment on shared infrastructure. Its major purpose is to provide safe and fast disaster cloud data processing and to predict and issue early warnings; it also provides a disaster risk assessment model, an emergency management service model and other related services.

Face Recognition

Face recognition is another application of big data analytics tools like Apache Spark. Using big data analysis systems built on face recognition technology, businesses are able to detect faces and recognize the gender, age, facial expressions and gazes of people watching advertising machines and other digital signage. The information that big data processing technology collects becomes an instrumental part of analyzing customer preferences; in the end, it helps provide customer-friendly services according to gender, age and emotional state. It generally facilitates the connection between people and things, making it capable of recommending the advertisements and messages that particular individuals prefer. It is also useful in non-manual, automatic applications aimed at recognizing the face, eyes and any other information that could help in predicting subsequent operations.

Conclusion

The paper addresses various issues, starting with the definition of big data and an analysis of the types of platforms that make up big data. It also summarizes the components of Apache Spark by describing Spark SQL, GraphX, the MLlib algorithm library and Spark Streaming and how they operate. It still needs to be determined which platforms are stable, however, and it is therefore advisable to use tools that have undergone testing and proven their scalability, flexibility and stability. Additionally, the essay introduces the reader to the areas where big data plays crucial roles. These include its use in the medical field for the prediction, analysis, prevention and management of various forms of disease, as well as in the management and prediction of natural disasters and in face recognition, among other areas. Technology is still evolving, and big data modelling could still use improvements such as enhanced real-time data processing, which tools like Apache Spark have already made possible courtesy of Twitter APIs. Most industries are also moving toward big data processing systems that are accurate and precise, to enhance their economic and social benefits. Batch processing, the most common form of big data processing, still needs to meet the data-processing frequency requirements these industries have.

References

  1. Chen, S. (2021). Research on Big Data Computing Model based on Spark and Big Data Applications. Journal of Physics: Conference Series, 2082(1), 012017. https://doi.org/10.1088/1742-6596/2082/1/012017
  2. Ismail Ebada, A., Elhenawy, I., Jeong, C.-W., Nam, Y., Elbakry, H., & Abdelrazek, S. (2022). Applying Apache Spark on Streaming Big Data for Health Status Prediction. Computers, Materials & Continua, 70(2), 3511–3527. https://doi.org/10.32604/cmc.2022.019458
  3. Nair, L. R., Shetty, S. D., & Shetty, S. D. (2018). Applying Spark based machine learning model on streaming big data for health status prediction. Computers & Electrical Engineering, 65, 393–399. https://doi.org/10.1016/j.compeleceng.2017.03.009
  4. Rahmani, A. M., Azhir, E., Ali, S., Mohammadi, M., Ahmed, O. H., Yassin Ghafour, M., Hasan Ahmed, S., & Hosseinzadeh, M. (2021). Artificial intelligence approaches and mechanisms for big data analytics: a systematic study. PeerJ Computer Science, 7, e488. https://doi.org/10.7717/peerj-cs.488