
Hive throws the parser error below when running the CREATE TABLE statement that follows:

NoViableAltException(332@[2189:1: columnNameTypeOrPKOrFK : ( ( foreignKeyWithName ) | ( primaryKeyWithName ) | ( primaryKeyWithoutName ) | ( foreignKeyWithoutName ) | ( columnNameType ) );]) at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)

CREATE TABLE NYSE(exchange STRING, stock_symbol STRING, stock_date DATE,
stock_price_open FLOAT, stock_price_high FLOAT, stock_price_low FLOAT,   
stock_price_close FLOAT, stock_volume INT, stock_price_avg_close FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';


This NoViableAltException occurs because exchange is a reserved keyword in Hive, so the parser does not accept it as a plain column name. To fix it, either rename the first column (for example, to exchange_1) or escape it with backticks (`).
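If you go with renaming, a minimal sketch of the same table could look like this (exchange_1 is only an illustrative name, not from the original post):

CREATE TABLE NYSE (exchange_1 STRING, stock_symbol STRING, stock_date DATE,
stock_price_open FLOAT, stock_price_high FLOAT, stock_price_low FLOAT,
stock_price_close FLOAT, stock_volume INT, stock_price_avg_close FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';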

Or, to keep the original column name, escape it with backticks and try these steps:

USE your_DB_Name;

DROP TABLE IF EXISTS NYSE PURGE;

CREATE TABLE NYSE (`exchange` STRING, stock_symbol STRING, stock_date DATE,
stock_price_open FLOAT, stock_price_high FLOAT, stock_price_low FLOAT,
stock_price_close FLOAT, stock_volume INT, stock_price_avg_close FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
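Once the table exists, the reserved name generally has to be backtick-quoted whenever it is referenced as well; a quick sanity check against the table created above might look like this:

SELECT `exchange`, stock_symbol, stock_price_close
FROM NYSE
LIMIT 10;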

