I have a file of over 400 GB, for example:
ID Data ...4000+columns
001 dsa
002 Data
… …
17201297 asdfghjkl
I want to chunk the file by ID so the data can be retrieved faster, for example:
mylocation/0/0/1/data.json
mylocation/0/0/2/data.json
.....
mylocation/1/7/2/0/1/2/9/7/data.json
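To make the intended layout concrete: each digit of the ID becomes one directory level. A minimal sketch of that mapping (the helper `pathForId` is hypothetical, not part of my code):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class IdPath {
    // Hypothetical helper: maps an ID such as "17201297" to the
    // per-digit directory layout "1/7/2/0/1/2/9/7/data.json".
    static Path pathForId(String baseDir, String id) {
        StringBuilder sb = new StringBuilder();
        for (char digit : id.toCharArray()) {
            sb.append(digit).append('/'); // one directory level per digit
        }
        sb.append("data.json");
        return Paths.get(baseDir, sb.toString());
    }

    public static void main(String[] args) {
        System.out.println(pathForId("mylocation", "001"));
        // on Unix-style paths: mylocation/0/0/1/data.json
    }
}
```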
My code works correctly, but whichever loop construct I use, it takes at least 159,206 milliseconds to complete 0.001% of the file creation.
In that case, could multithreading reduce the time (e.g., by writing 100 or 1k files at once)?
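By "100 or 1k files at once" I mean something like the following sketch: each (path, json) pair becomes a task on a fixed thread pool, and the batch is awaited before moving on. The helper `writeBatch` is hypothetical, and the task body only simulates the write so the sketch stays runnable; whether this actually helps depends on the disk, not the CPU, being the bottleneck.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelWriteSketch {
    // Submits one task per (path, json) pair to a fixed thread pool and
    // waits for the whole batch. A real task body would open the path in
    // append mode and write the json; here it only simulates the write.
    static int writeBatch(List<String[]> batch, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Integer>> pending = new ArrayList<>();
        for (String[] task : batch) {
            pending.add(pool.submit(() -> {
                // real code: append task[1] to the file at task[0]
                return task[1].length(); // simulated "bytes written"
            }));
        }
        int done = 0;
        for (Future<Integer> f : pending) {
            f.get(); // blocks until the task finishes; rethrows its exceptions
            done++;
        }
        pool.shutdown();
        return done;
    }

    public static void main(String[] args) throws Exception {
        List<String[]> batch = List.of(
                new String[]{"mylocation/0/0/1/data.json", "{\"id\":1}"},
                new String[]{"mylocation/0/0/2/data.json", "{\"id\":2}"});
        System.out.println(writeBatch(batch, 4) + " tasks completed");
        // prints: 2 tasks completed
    }
}
```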
My current code is:
    int percent = 0;
    int dataCounter = 0;
    String theline; // current input line
    File file = new File(fileLocation + fileName);
    FileReader fileReader = new FileReader(file); // to read the input file
    BufferedReader bufReader = new BufferedReader(fileReader);
    BufferedWriter fw = null;
    // cache one open writer per generated file so it is not reopened per row
    LinkedHashMap<String, BufferedWriter> fileMap = new LinkedHashMap<>();
    while ((theline = bufReader.readLine()) != null) {
        String generatedFilename = generatedFile + chrNo + "/" + directory + "gnomeV3.json";
        Path generatedJsonFilePath = Paths.get(generatedFilename);
        if (!Files.exists(generatedJsonFilePath)) { // create directory and file on first sight
            Files.createDirectories(generatedJsonFilePath.getParent());
            Files.createFile(generatedJsonFilePath); // was files.createFile (compile error)
        }
        String jsonData = DBFileMaker(chrNo, theline, pos);
        if (fileMap.containsKey(generatedFilename)) {
            fw = fileMap.get(generatedFilename);
            fw.write(jsonData);
        } else {
            fw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(generatedFilename)));
            fw.write(jsonData);
            fileMap.put(generatedFilename, fw);
        }
        if (dataCounter == 172 * percent) { // as I know my number of rows
            long millisec = stopwatch.elapsed(TimeUnit.MILLISECONDS);
            System.out.println("Upto: " + pos + " as " + (0.001 * percent)
                    + "% completion successful." + " took: " + millisec + " milliseconds");
            percent++;
        }
        dataCounter++;
    }
    for (BufferedWriter generatedFiles : fileMap.values()) {
        generatedFiles.close();
    }