I'm trying to read a fairly large CSV file with Pandas and split it into two random chunks, one containing 10% of the data and the other 90%.
Here's my current attempt:
```python
rows = list(data.index)
row_count = len(rows)
random.shuffle(rows)
data = data.reindex(rows)
training_data = data[row_count // 10:]
testing_data = data[:row_count // 10]
```
For some reason, sklearn throws this error when I try to use one of the resulting DataFrame objects in an SVM classifier:
IndexError: each subindex must be either a slice, an integer, Ellipsis, or newaxis
I think I'm doing it wrong. Is there a better way to do this?
What version of pandas are you using? For me, your code works fine (I'm on git master).
Another approach could be:
```python
In : import numpy as np

In : import pandas

In : import random

In : df = pandas.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

In : rows = random.sample(list(df.index), 10)

In : df_10 = df.loc[rows]

In : df_90 = df.drop(rows)
```
Newer versions of pandas (0.16.1 onward) support this directly via `DataFrame.sample`: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html
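For example, the 10/90 split from the question can be done in two lines with `sample` plus `drop` (a sketch; the example frame and `random_state` value are illustrative):

```python
import numpy as np
import pandas as pd

# small example frame standing in for the CSV data
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

# draw 10% of the rows at random; random_state makes the draw reproducible
testing_data = df.sample(frac=0.1, random_state=0)

# the remaining 90% are the rows that were not drawn
training_data = df.drop(testing_data.index)
```

Because `drop` removes the sampled index labels, the two frames are guaranteed to be disjoint.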
I have found that np.random.choice() (new in NumPy 1.7.0) works quite well for this. For example, you can pass it a DataFrame's index values and the integer 10 to select 10 uniformly sampled random rows.
```python
# replace=False ensures 10 distinct row labels; df.loc selects them by label
rows = np.random.choice(df.index.values, 10, replace=False)
sampled_df = df.loc[rows]
```
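To also get the complementary 90% chunk with this approach, the sampled labels can be dropped from the original frame (a sketch; the example DataFrame here is illustrative):

```python
import numpy as np
import pandas as pd

# example frame standing in for the CSV data
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

# sample 10 distinct row labels (replace=False avoids duplicates)
rows = np.random.choice(df.index.values, 10, replace=False)
sampled_df = df.loc[rows]

# the complement: every row that was not sampled
remaining_df = df.drop(rows)
```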