
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

I am new to Hadoop and I am trying to run join queries in Hive. I created two tables (table1 and table2). When I run a join query, I get the following error message:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

However, when I run the same query in the Hive web UI, it executes and returns the correct results. Can anyone explain what is going wrong?

2
Gaurav Pandey

Just put this command before your query:

SET hive.auto.convert.join=false;

It really works!
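
For context, hive.auto.convert.join controls whether Hive automatically rewrites a common join as a map join, which spawns a local task (the MapredLocalTask named in the error) to load the small table into memory; that local task can fail on memory-constrained machines. A minimal sketch of checking and then disabling it for the current session only (SET with no value just prints the current setting):

-- Print the current value of the flag
SET hive.auto.convert.join;
-- Disable automatic map-join conversion for this session only
SET hive.auto.convert.join=false;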

4
Sveta Fishka

I just added the following before running my query and it worked:

SET hive.auto.convert.join=false;
3
Gaurav Pandey

Try setting the AuthMech parameter on the connection.

I set it to 2 and specified the user name, which solved my problem with CTAS (CREATE TABLE AS SELECT) statements.

Regards, Okan
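
For reference, AuthMech is a connection property of the Cloudera (Simba) Hive JDBC/ODBC drivers, where 2 selects user-name authentication. A sketch of a JDBC URL under that assumption, with the Quick Start VM's default host, port, and user as placeholders:

jdbc:hive2://quickstart.cloudera:10000;AuthMech=2;UID=cloudera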

1
Okan KOCATÜRK

I was also facing this problem on the Cloudera Quick Start VM 5.12, and it was resolved by running the statement below at the Hive prompt:

SET hive.auto.convert.join=false;

Hopefully the more detailed walkthrough below is useful:

Step 1: Import all tables from the retail_db database in MySQL

sqoop import-all-tables \
--connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
--username retail_dba \
--password cloudera \
--num-mappers 1 \
--warehouse-dir /user/cloudera/sqoop/import-all-tables-text \
--as-textfile

Step 2: Create the retail_db database and the required tables in Hive

create database retail_db;
use retail_db;

create external table categories(
  category_id int,
  category_department_id int,
  category_name string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/categories';

create external table customers(
  customer_id int,
  customer_fname string,
  customer_lname string,
  customer_email string,
  customer_password string,
  customer_street string,
  customer_city string,
  customer_state string,
  customer_zipcode string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/customers';

create external table departments(
  department_id int,
  department_name string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/departments';

create external table order_items(
  order_item_id int,
  order_item_order_id int,
  order_item_product_id int,
  order_item_quantity int,
  order_item_subtotal float,
  order_item_product_price float)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/order_items';

create external table orders(
  order_id int,
  order_date string,
  order_customer_id int,
  order_status string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/orders';

create external table products(
  product_id int,
  product_category_id int,
  product_name string,
  product_description string,
  product_price float,
  product_image string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/products';
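
Optionally, a quick sanity check that the external tables line up with the files Sqoop wrote (the counts shown will depend on your copy of retail_db):

show tables;
select count(*) from orders;
select * from order_items limit 5;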

Step 3: Run the JOIN query

SET hive.cli.print.current.db=true;

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

The query above produced the error below:

Query ID = cloudera_20171029182323_6eedd682-256b-466c-b2e5-58ea100715fb
Total jobs = 1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Step 4: The problem above was resolved by running the statement below at the Hive prompt:

SET hive.auto.convert.join=false;
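
If you would rather keep map joins enabled, one alternative worth trying (an assumption, not verified on this VM) is to give the local map-join task more memory headroom instead of disabling the conversion:

-- Let the local map-join task use up to 99% of its heap (the default is 0.90)
SET hive.mapjoin.localtask.max.memory.usage=0.99;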

Step 5: Query result

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

Query ID = cloudera_20171029182525_cfc70553-89d2-4c61-8a14-4bbeecadb3cf
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0005, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0005/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0005
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2017-10-29 18:25:19,861 Stage-1 map = 0%,  reduce = 0%
2017-10-29 18:25:26,181 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.72 sec
2017-10-29 18:25:27,240 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.42 sec
2017-10-29 18:25:32,479 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 8.01 sec
MapReduce Total cumulative CPU time: 8 seconds 10 msec
Ended Job = job_1509278183296_0005
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0006, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0006/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0006
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2017-10-29 18:25:38,676 Stage-2 map = 0%,  reduce = 0%
2017-10-29 18:25:43,925 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.85 sec
2017-10-29 18:25:49,142 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_1509278183296_0006
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 8.01 sec   HDFS Read: 8422614 HDFS Write: 17364 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 2.13 sec   HDFS Read: 22571 HDFS Write: 407 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 140 msec
OK
2013-07-25 00:00:00.0   68153.83132743835
2013-07-26 00:00:00.0   136520.17266082764
2013-07-27 00:00:00.0   101074.34193611145
2013-07-28 00:00:00.0   87123.08192253113
2013-07-29 00:00:00.0   137287.09244918823
2013-07-30 00:00:00.0   102745.62186431885
2013-07-31 00:00:00.0   131878.06256484985
2013-08-01 00:00:00.0   129001.62241744995
2013-08-02 00:00:00.0   109347.00200462341
2013-08-03 00:00:00.0   95266.89186286926
Time taken: 35.721 seconds, Fetched: 10 row(s)
0
Shrey