
Associate-Developer-Apache-Spark Dumps Updated Jun 27, 2022 Practice Test and 179 unique questions [Q35-Q53]

2022 Latest 100% Exam Passing Ratio - Associate-Developer-Apache-Spark Dumps PDF

NEW QUESTION 35
Which of the following describes the conversion of a computational query into an execution plan in Spark?
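As background, the plan Spark builds for a query can be inspected with explain(); a minimal sketch, assuming a DataFrame such as transactionsDf is available:
# Prints the parsed and analyzed logical plans, the optimized logical plan,
# and the physical plan Spark selects for execution
transactionsDf.groupBy("storeId").count().explain(mode="extended")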

 
 
 
 
 

NEW QUESTION 36
The code block shown below should return a DataFrame with all columns of DataFrame transactionsDf, but only a maximum of 2 rows in which column productId has a value of at least 2. Choose the answer that correctly fills the blanks in the code block to accomplish this.
transactionsDf.__1__(__2__).__3__
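One way to fill the blanks, shown only as a sketch and not as the keyed answer:
from pyspark.sql import functions as F
# All columns are kept; rows are restricted to productId >= 2, and at most 2 rows are returned
transactionsDf.filter(F.col("productId") >= 2).limit(2)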

 
 
 
 
 

NEW QUESTION 37
Which of the following statements about the differences between actions and transformations is correct?
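For illustration, a short sketch of the distinction: transformations are evaluated lazily, while actions trigger execution.
# Transformation: only extends the query plan, nothing runs yet
filtered = transactionsDf.filter("predError > 2")
# Action: forces evaluation and returns a value to the driver
rowCount = filtered.count()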

 
 
 
 
 

NEW QUESTION 38
Which of the following code blocks reads in the parquet file stored at location filePath, given that all columns in the parquet file contain only whole numbers and are stored in the most appropriate format for this kind of data?
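As a point of reference, a plain parquet read looks like this (a sketch; parquet files already carry their own schema and column types):
df = spark.read.parquet(filePath)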

 
 
 
 
 

NEW QUESTION 39
Which of the following code blocks uses a schema fileSchema to read a parquet file at location filePath into a DataFrame?
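A sketch of one way to combine a predefined schema with a parquet read (not necessarily the exact wording of the keyed answer):
df = spark.read.schema(fileSchema).format("parquet").load(filePath)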

 
 
 
 
 

NEW QUESTION 40
Which of the following code blocks immediately removes the previously cached DataFrame transactionsDf from memory and disk?
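For reference, a minimal sketch of an eager removal from the cache:
# blocking=True makes the call wait until all cached blocks have been removed
transactionsDf.unpersist(blocking=True)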

 
 
 
 
 

NEW QUESTION 41
Which of the following code blocks creates a new DataFrame with 3 columns, productId, highest, and lowest, that shows the largest and smallest values of column value for each value in column productId of DataFrame transactionsDf?
Sample of DataFrame transactionsDf:
+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            1|        3|    4|     25|        1|null|
|            2|        6|    7|      2|        2|null|
|            3|        3| null|     25|        3|null|
|            4|     null| null|      3|        2|null|
|            5|     null| null|   null|        2|null|
|            6|        3|    2|     25|        2|null|
+-------------+---------+-----+-------+---------+----+
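One way to express this aggregation, shown as a sketch:
from pyspark.sql import functions as F
transactionsDf.groupBy("productId").agg(
    F.max("value").alias("highest"),
    F.min("value").alias("lowest"),
)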

 
 
 
 
 

NEW QUESTION 42
Which of the following code blocks returns a copy of DataFrame itemsDf where the column supplier has been renamed to manufacturer?
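A minimal sketch of the renaming operation:
itemsDf.withColumnRenamed("supplier", "manufacturer")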

 
 
 
 
 

NEW QUESTION 43
The code block displayed below contains an error. The code block should produce a DataFrame with color as the only column and three rows with color values of red, blue, and green, respectively.
Find the error.
Code block:
spark.createDataFrame([("red",), ("blue",), ("green",)], "color")
Instead of calling spark.createDataFrame, just DataFrame should be called.
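For comparison, one variant that does produce the intended single-column DataFrame passes the column name as a list rather than as a schema string (a sketch, not the only possible fix):
spark.createDataFrame([("red",), ("blue",), ("green",)], ["color"])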

 
 
 
 

NEW QUESTION 44
The code block shown below should return a new 2-column DataFrame that shows one attribute from column attributes per row next to the associated itemName, for all suppliers in column supplier whose name includes Sports. Choose the answer that correctly fills the blanks in the code block to accomplish this.
Sample of DataFrame itemsDf:
+------+----------------------------------+-----------------------------+-------------------+
|itemId|itemName                          |attributes                   |supplier           |
+------+----------------------------------+-----------------------------+-------------------+
|1     |Thick Coat for Walking in the Snow|[blue, winter, cozy]         |Sports Company Inc.|
|2     |Elegant Outdoors Summer Dress     |[red, summer, fresh, cooling]|YetiX              |
|3     |Outdoors Backpack                 |[green, summer, travel]      |Sports Company Inc.|
+------+----------------------------------+-----------------------------+-------------------+
Code block:
itemsDf.__1__(__2__).select(__3__, __4__)
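A sketch of one way to fill the blanks, assuming the usual functions import:
from pyspark.sql import functions as F
# Keep only suppliers whose name contains "Sports", then emit one attribute per row next to itemName
itemsDf.filter(F.col("supplier").contains("Sports")).select("itemName", F.explode("attributes"))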

 
 
 
 
 

NEW QUESTION 45
Which of the following statements about Spark’s DataFrames is incorrect?

 
 
 
 
 

NEW QUESTION 46
The code block shown below should return all rows of DataFrame itemsDf that have at least 3 items in column itemNameElements. Choose the answer that correctly fills the blanks in the code block to accomplish this.
Example of DataFrame itemsDf:
+------+----------------------------------+-------------------+------------------------------------------+
|itemId|itemName                          |supplier           |itemNameElements                          |
+------+----------------------------------+-------------------+------------------------------------------+
|1     |Thick Coat for Walking in the Snow|Sports Company Inc.|[Thick, Coat, for, Walking, in, the, Snow]|
|2     |Elegant Outdoors Summer Dress     |YetiX              |[Elegant, Outdoors, Summer, Dress]        |
|3     |Outdoors Backpack                 |Sports Company Inc.|[Outdoors, Backpack]                      |
+------+----------------------------------+-------------------+------------------------------------------+
Code block:
itemsDf.__1__(__2__(__3__)__4__)
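One way to express the size check, shown as a sketch:
from pyspark.sql import functions as F
# Keeps rows whose itemNameElements array holds at least 3 entries
itemsDf.filter(F.size("itemNameElements") >= 3)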

 
 
 
 
 

NEW QUESTION 47
The code block displayed below contains an error. The code block should merge the rows of DataFrames transactionsDfMonday and transactionsDfTuesday into a new DataFrame, matching column names and inserting null values where column names do not appear in both DataFrames. Find the error.
Sample of DataFrame transactionsDfMonday:
+-------------+---------+-----+-------+---------+----+
|transactionId|predError|value|storeId|productId|   f|
+-------------+---------+-----+-------+---------+----+
|            5|     null| null|   null|        2|null|
|            6|        3|    2|     25|        2|null|
+-------------+---------+-----+-------+---------+----+
Sample of DataFrame transactionsDfTuesday:
+-------+-------------+---------+-----+
|storeId|transactionId|productId|value|
+-------+-------------+---------+-----+
|     25|            1|        1|    4|
|      2|            2|        2|    7|
|      3|            4|        2| null|
|   null|            5|        2| null|
+-------+-------------+---------+-----+
Code block:
sc.union([transactionsDfMonday, transactionsDfTuesday])
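For reference, a sketch of a column-name-aware union called on the DataFrames rather than on the SparkContext (allowMissingColumns requires Spark 3.1 or later):
# Matches columns by name and fills columns missing from one DataFrame with nulls
transactionsDfMonday.unionByName(transactionsDfTuesday, allowMissingColumns=True)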

 
 
 
 
 

NEW QUESTION 48
Which of the following describes the role of tasks in the Spark execution hierarchy?

 
 
 
 
 

NEW QUESTION 49
The code block shown below should write DataFrame transactionsDf to disk at path csvPath as a single CSV file, using tabs (\t characters) as separators between columns, expressing missing values as string n/a, and omitting a header row with column names. Choose the answer that correctly fills the blanks in the code block to accomplish this.
transactionsDf.__1__.write.__2__(__3__, "\t").__4__.__5__(csvPath)
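One possible way the filled-in code could read, shown as a sketch (CSV output omits the header row by default):
# coalesce(1) produces a single output file; nullValue controls how missing values are written
transactionsDf.coalesce(1).write.option("sep", "\t").option("nullValue", "n/a").csv(csvPath)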

 
 
 
 

NEW QUESTION 50
Which of the following describes a shuffle?
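As an illustration, a wide transformation such as a grouping forces a shuffle, visible as an Exchange operator in the physical plan:
# The Exchange operator in the printed plan marks where rows are redistributed
# across partitions so that matching keys end up on the same partition
transactionsDf.groupBy("storeId").count().explain()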

 
 
 
 
 

NEW QUESTION 51
The code block displayed below contains an error. The code block is intended to join DataFrame itemsDf with the larger DataFrame transactionsDf on column itemId. Find the error.
Code block:
transactionsDf.join(itemsDf, "itemId", how="broadcast")
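For reference, a sketch of requesting a broadcast of the smaller DataFrame through a hint rather than through the join type:
from pyspark.sql.functions import broadcast
# "broadcast" is not a valid join type; the broadcast hint wraps the smaller DataFrame instead
transactionsDf.join(broadcast(itemsDf), "itemId")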

 
 
 
 
 

NEW QUESTION 52
The code block displayed below contains an error. When the code block below has executed, it should have divided DataFrame transactionsDf into 14 parts, based on columns storeId and transactionDate (in this order). Find the error.
Code block:
transactionsDf.coalesce(14, ("storeId", "transactionDate"))
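For comparison, a sketch of splitting by both a target partition count and columns; coalesce can only reduce the partition count and accepts no partitioning columns:
transactionsDf.repartition(14, "storeId", "transactionDate")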

 
 
 
 
 

NEW QUESTION 53
Which of the following options describes the responsibility of the executors in Spark?

 
 
 
 
 

Verified Associate-Developer-Apache-Spark dumps Q&As - 100% Pass from Actualtests4sure: https://www.actualtests4sure.com/Associate-Developer-Apache-Spark-test-questions.html