In Hadoop, MapReduce is the component that serves as the framework for performing computations on data stored in the distributed file system.
Broadly speaking, Hadoop is an open-source
framework for writing and running distributed applications
that process large amounts of data. Distributed computing
is a wide and varied field, but the key distinctions of Hadoop are that it is
a) accessible b) robust
c) scalable d) simple
package online.map.reduce;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class MaxTemperature
{
    // Mapper: extracts a (year, temperature) pair from each fixed-width input record.
    public static class MapForMaxTemperature
            extends Mapper<LongWritable, Text, Text, IntWritable>
    {
        @Override
        public void map(LongWritable key, Text value, Context con)
                throws IOException, InterruptedException
        {
            String line = value.toString();
            String year = line.substring(5, 9);                   // year field of the record
            int temp = Integer.parseInt(line.substring(12, 14));  // temperature field
            con.write(new Text(year), new IntWritable(temp));
        }
    }

    // Reducer (also used as the combiner): emits the maximum temperature seen per year.
    public static class ReduceForMaxTemperature
            extends Reducer<Text, IntWritable, Text, IntWritable>
    {
        @Override
        public void reduce(Text year, Iterable<IntWritable> temps, Context con)
                throws IOException, InterruptedException
        {
            // Start from MIN_VALUE, not 0, so below-zero temperatures are handled correctly.
            int max = Integer.MIN_VALUE;
            for (IntWritable t : temps)
            {
                max = Math.max(max, t.get());
            }
            con.write(year, new IntWritable(max));
        }
    }

    public static void main(String[] args) throws Exception
    {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);

        Job j = Job.getInstance(c, "max temperature"); // replaces the deprecated new Job(conf, name)
        j.setJarByClass(MaxTemperature.class);
        j.setMapperClass(MapForMaxTemperature.class);
        // Max is associative and commutative, so the reducer can safely double as the combiner.
        j.setCombinerClass(ReduceForMaxTemperature.class);
        j.setReducerClass(ReduceForMaxTemperature.class);
        j.setOutputKeyClass(Text.class);
        j.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(j, input);
        FileOutputFormat.setOutputPath(j, output);
        System.exit(j.waitForCompletion(true) ? 0 : 1);
    }
}
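The mapper above assumes a fixed-width record layout in which characters 5 through 8 hold the year and characters 12 through 13 hold the temperature. The sample line below is made up purely to illustrate how those substring offsets behave:

```java
public class ParseDemo {
    public static void main(String[] args) {
        // Hypothetical fixed-width record: indices 5-8 carry the year,
        // indices 12-13 carry the temperature (filler characters elsewhere).
        String line = "AAAAA2010XYZ31";
        String year = line.substring(5, 9);                   // -> "2010"
        int temp = Integer.parseInt(line.substring(12, 14));  // -> 31
        System.out.println(year + " " + temp);
    }
}
```

Note that substring(a, b) in Java takes characters from index a up to but not including index b, which is why the end offsets are 9 and 14.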
Comparing SQL databases and Hadoop:
Hadoop is a framework for processing data, which can make it better
suited than standard relational databases for the data-processing workloads
of many of today's applications. The main reason is that
SQL (Structured Query Language) is designed to target structured data.
Many of Hadoop's initial applications deal with unstructured data, such as plain text. From this perspective, Hadoop provides a more general paradigm than SQL. For working only with structured data, the comparison is more nuanced.
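To make that nuance concrete: when the data is small and already structured, the same max-per-year aggregation needs no distributed framework at all. A minimal single-machine sketch, using made-up sample records, might look like this:

```java
import java.util.HashMap;
import java.util.Map;

public class MaxPerYear {
    public static void main(String[] args) {
        // Hypothetical structured (year, temperature) records
        String[][] records = { {"2010", "31"}, {"2010", "25"}, {"2011", "28"} };
        Map<String, Integer> max = new HashMap<>();
        for (String[] r : records) {
            int t = Integer.parseInt(r[1]);
            // keep the larger of the stored value and the new temperature
            max.merge(r[0], t, Math::max);
        }
        System.out.println(max.get("2010") + " " + max.get("2011"));
    }
}
```

Hadoop earns its keep when the input is terabytes of unstructured text spread across a cluster; the map and reduce phases parallelize this same per-key logic across many machines.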
Many
companies run corporate training and implement proof-of-concept (POC) projects
with Hadoop for future use. Hadoop is used across many large organizations and industries, including
eCommerce, hospitality, pharma, software, manufacturing, and finance, and it is widely regarded as an important technology for the next generation of data systems.