Structuring df.to_csv output

I am trying to write the output to a csv file, but I am getting a different format.

What can I change to get a clean result?

Code:

 import pandas as pd
 from datetime import datetime
 import csv

 df = pd.read_csv('one_hour.csv')
 df.columns = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']

 count_med = df.groupby(['date'])[['count']].median()
 unique_med = df.groupby(['date'])[['unique']].median()
 date_count = df['date'].nunique()
 #print count_med
 #print unique_med

 cols = ['date_count', 'count_med', 'unique_med']
 outf = pd.DataFrame([[date_count, count_med, unique_med]], columns=cols)
 outf.to_csv('date_med.csv', index=False, header=False)

Input: just a few rows from a huge data file.

 2004-01-05,21:00:00,22:00:00,Mon,16553,783
 2004-01-05,22:00:00,23:00:00,Mon,18944,790
 2004-01-05,23:00:00,00:00:00,Mon,17534,750
 2004-01-06,00:00:00,01:00:00,Tue,17262,747
 2004-01-06,01:00:00,02:00:00,Tue,19072,777
 2004-01-06,02:00:00,03:00:00,Tue,18275,785
 2004-01-06,03:00:00,04:00:00,Tue,13589,757
 2004-01-06,04:00:00,05:00:00,Tue,16053,735
 2004-01-06,05:00:00,06:00:00,Tue,11440,636

Output:

 63,"            count
 date
 2004-01-05  10766.0
 2004-01-06  11530.0
 2004-01-07  11270.0
 2004-01-08  14819.5
 2004-01-09  12933.5
 2004-01-10  10088.0
 2004-01-11  10923.0
 2004-02-03  14760.5
 ...             ...
 2004-02-07  10131.5
 2004-02-08  11184.0
 [63 rows x 1 columns]","            unique
 date
 2004-01-05   633.0
 2004-01-06   741.0
 2004-01-07   752.5
 2004-02-03   779.5
 ...            ...
 2004-02-07   643.5
 [63 rows x 1 columns]"
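The quoting above happens because `groupby(...).median()` returns a whole DataFrame, and placing that DataFrame into a single cell makes `to_csv` write out its string form as one long quoted field. A minimal sketch reproducing the effect with a few of the sample rows (data abbreviated here for illustration):

```python
import io
import pandas as pd

# a few of the sample rows, just enough to reproduce the effect
csv_text = """2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-06,00:00:00,01:00:00,Tue,17262,747"""

cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
df = pd.read_csv(io.StringIO(csv_text), names=cols)

# groupby(...)[['count']].median() is a DataFrame, not a scalar
count_med = df.groupby(['date'])[['count']].median()
print(type(count_med))

# putting that DataFrame into one cell stores its string form, which
# contains newlines, so to_csv has to quote the entire cell
outf = pd.DataFrame([[df['date'].nunique(), count_med]],
                    columns=['date_count', 'count_med'])
print(outf.to_csv(index=False, header=False))
```

That is exactly the shape of the unwanted output: one row, with each median table crammed into a quoted cell.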

But the expected result should not look like this.

Expected result: the rounded values together with the date:

 2004-01-05,10766,633
 2004-01-06,11530,741
 2004-01-07,11270,752

Try this:

 cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
 df = pd.read_csv(fn, header=None, names=cols)
 (df.groupby(['date'])[['count','unique']]
    .agg({'count':'median','unique':'median'})
    .round()
    .to_csv('d:/temp/out.csv', header=None))

out.csv:

 2004-01-05,764,17044.0
 2004-01-06,757,17262.0
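Note the column order in out.csv above (`unique` before `count`): with a dict passed to `agg`, older pandas does not guarantee the output column order. Calling `median()` directly on the two-column selection keeps the columns in the order you selected them; a small sketch on the sample rows:

```python
import io
import pandas as pd

rows = """2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777"""

cols = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
df = pd.read_csv(io.StringIO(rows), header=None, names=cols)

# median() on the selection keeps the columns in selection order:
# count first, then unique, matching the expected output format
out = df.groupby('date')[['count', 'unique']].median().round().astype(int)
print(out.to_csv(header=False))
```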

You need:

 import pandas as pd
 import io

 temp=u"""2004-01-05,21:00:00,22:00:00,Mon,16553,783
 2004-01-05,22:00:00,23:00:00,Mon,18944,790
 2004-01-05,23:00:00,00:00:00,Mon,17534,750
 2004-01-06,00:00:00,01:00:00,Tue,17262,747
 2004-01-06,01:00:00,02:00:00,Tue,19072,777
 2004-01-06,02:00:00,03:00:00,Tue,18275,785
 2004-01-06,03:00:00,04:00:00,Tue,13589,757
 2004-01-06,04:00:00,05:00:00,Tue,16053,735
 2004-01-06,05:00:00,06:00:00,Tue,11440,636"""
 #after testing replace io.StringIO(temp) with the filename
 df = pd.read_csv(io.StringIO(temp),
                  parse_dates=[0],
                  names=['date', 'startTime', 'endTime', 'day', 'count', 'unique'])
 print (df)

 outf = df.groupby('date')['count', 'unique'].median().round().astype(int)
 print (outf)
             count  unique
 date
 2004-01-05  17534     783
 2004-01-06  16658     752

 outf.to_csv('date_med.csv', header=False)

Timings:

 In [20]: %timeit df.groupby('date')['count', 'unique'].median().round().astype(int)
 The slowest run took 4.47 times longer than the fastest. This could mean that an intermediate result is being cached.
 100 loops, best of 3: 2.67 ms per loop

 In [21]: %timeit df.groupby(['date'])[['count','unique']].agg({'count':'median','unique':'median'}).round().astype(int)
 The slowest run took 4.44 times longer than the fastest. This could mean that an intermediate result is being cached.
 100 loops, best of 3: 3.64 ms per loop
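One caveat on current pandas versions: the tuple selection `df.groupby('date')['count', 'unique']` used above was deprecated and later removed, so select the columns with a list instead. The approach is otherwise unchanged; a sketch with the same sample data:

```python
import io
import pandas as pd

temp = """2004-01-05,21:00:00,22:00:00,Mon,16553,783
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636"""

df = pd.read_csv(io.StringIO(temp), parse_dates=[0],
                 names=['date', 'startTime', 'endTime', 'day', 'count', 'unique'])

# list selection instead of the removed tuple form
outf = df.groupby('date')[['count', 'unique']].median().round().astype(int)
print(outf.to_csv(header=False))
```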