Parse csv with complex numbers written by Python numpy
I am not actually sure which Python features produce this format (I think both numpy and pandas use it), or perhaps it is part of a larger standard, but basically I have CSVs with text that looks like
(-0.0053973628685668375-0.004476730131734169j),(0.005108157082444198-0.005597795916657765j),,,,,,,-298.0,-298.0,37293,-0.7617709422297042,0.7202575393833991,(0.001506298444580933-0.0035885955125266656j)
and I want to parse it into a numeric array.
The real-valued scalar entries are easy (well, I can do a str2double and it's not super fast, but it's acceptable). The blanks are also not too bad, because after a simple textscan with a comma delimiter I can find the empties and set them to a desired value. But what the heck do I do with these ridiculous complex number strings?
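Roughly, what I do now for the real-valued and blank entries is something like this (simplified sketch; the sample line and the fill value of 0 are just for illustration):

line = '(-0.0053973628685668375-0.004476730131734169j),,,-298.0,37293,-0.7617709422297042';
raw = textscan(line, '%s', 'Delimiter', ',');    % split each field on commas
tokens = raw{1};
vals = str2double(tokens);                       % plain real scalars parse fine; blanks give NaN
vals(cellfun(@isempty, tokens)) = 0;             % set the blanks to a desired value (0 here)
% ...the complex '(a+bj)' entries are the ones I don't have a good way to handle.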
There are loopy solutions with regexp or with splitting out the real and imag components, but they are too slow when dealing with hundreds of thousands of entries. I could also do things like find entries containing a "j" and process them separately, but is there something better?
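For reference, the element-by-element fallback I mean is something along these lines (here I just strip the parentheses and lean on str2double for the a+bj part instead of a regexp, but it's the same loopy idea, and it's far too slow at this scale):

tokens = {'(-0.0053973628685668375-0.004476730131734169j)'; ''; '-298.0'; '37293'};  % example fields
vals = nan(size(tokens));
for k = 1:numel(tokens)
    tok = tokens{k};
    if isempty(tok)
        vals(k) = 0;                              % fill value for blanks
    elseif any(tok == 'j')
        vals(k) = str2double(tok(2:end-1));       % drop the surrounding '(' ')' and parse a+bj
    else
        vals(k) = str2double(tok);                % ordinary real scalar
    end
end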