Accessing data, the DataFrame class, reading/writing files

We can load the pandas package by using the usual import syntax.

In [1]:
import pandas as pd

We can read a table from a file with the read_csv() function. The returned table is of type DataFrame.

In [2]:
df = pd.read_csv("data/smallpeople.csv")
In [3]:
type(df)
Out[3]:
pandas.core.frame.DataFrame

If we simply display the table, we get a nicely formatted output:

  • the first row contains the names of the columns
  • the first column contains the row identifiers, the so-called index
  • the rest of the table contains the actual data
In [4]:
df
Out[4]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
2    Stephen           5          4    male   19  12:35
3       Jane           3          5  female   20  14:50
4    Charles           4          .    male   21  14:55

We can access a single column of the table by its name, as if it were (and in fact it is) an element of a dictionary.

In [5]:
df['Grade_Paul']
Out[5]:
0    2
1    4
2    5
3    3
4    4
Name: Grade_Paul, dtype: int64

The returned column is a more complex object that still carries the index and the name of the column. Its type is pandas.Series.

In [6]:
type(df['Grade_Paul'])
Out[6]:
pandas.core.series.Series

Most of the time, we can use a Series much like a numpy array.

In [7]:
df['Grade_Paul']**2
Out[7]:
0     4
1    16
2    25
3     9
4    16
Name: Grade_Paul, dtype: int64

But we can also obtain the underlying numpy array that stores the data.

In [8]:
df['Grade_Paul'].values
Out[8]:
array([2, 4, 5, 3, 4])
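
Beyond element-wise arithmetic, a Series also supports numpy-style aggregation and boolean masking; a minimal sketch using the df loaded above:

df['Grade_Paul'].mean()          # average of the grades given by Paul
df[df['Grade_Paul'] >= 4]        # boolean mask: keep the rows where Paul's grade is at least 4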

We can also use the .loc[] indexer, which can select both rows and columns by their labels.

Column

In [9]:
df.loc[:,'Grade_Paul']
Out[9]:
0    2
1    4
2    5
3    3
4    4
Name: Grade_Paul, dtype: int64

Row. Rows are also Series objects.

In [10]:
df.loc[1,:]
Out[10]:
Name           Sarah
Grade_Paul         4
Grade_John         4
Gender        female
Age               22
Date           13:20
Name: 1, dtype: object

To get only one element:

In [11]:
df.loc[0,'Grade_Paul']
Out[11]:
2

We will come back to indexing DataFrames later. For now, let's have a look at file I/O!

Input

read_csv()

One of the fastest and most flexible file readers in Python is the read_csv() function of the pandas module. Let's have a look at some of its most important arguments.

Character separating the columns: sep

In plain text files, the columns of a table are most often separated by a tab or a comma; we can specify the separator with the sep keyword argument.

In [12]:
df=pd.read_csv("data/smallpeople.csv",sep=',')
df.head()
Out[12]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
2    Stephen           5          4    male   19  12:35
3       Jane           3          5  female   20  14:50
4    Charles           4          .    male   21  14:55

If we set it incorrectly, we usually end up with a single column containing everything.

In [13]:
df=pd.read_csv("data/smallpeople.csv",sep=' ')
df
Out[13]:
Name,Grade_Paul,Grade_John,Gender,Age,Date
0 Valentine,2,.,male,20,12:31
1 Sarah,4,4,female,22,13:20
2 Stephen,5,4,male,19,12:35
3 Jane,3,5,female,20,14:50
4 Charles,4,.,male,21,14:55

Or the function raises an error complaining that the rows do not contain the same number of columns.

In [15]:
df=pd.read_csv("data/smallpeople.csv",sep='p')
---------------------------------------------------------------------------
ParserError                               Traceback (most recent call last)
<ipython-input-15-ca8a2649d88e> in <module>()
----> 1 df=pd.read_csv("data/smallpeople.csv",sep='p')

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
    707                     skip_blank_lines=skip_blank_lines)
    708 
--> 709         return _read(filepath_or_buffer, kwds)
    710 
    711     parser_f.__name__ = name

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    453 
    454     try:
--> 455         data = parser.read(nrows)
    456     finally:
    457         parser.close()

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in read(self, nrows)
   1067                 raise ValueError('skipfooter not supported for iteration')
   1068 
-> 1069         ret = self._engine.read(nrows)
   1070 
   1071         if self.options.get('as_recarray'):

~/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py in read(self, nrows)
   1837     def read(self, nrows=None):
   1838         try:
-> 1839             data = self._reader.read(nrows)
   1840         except StopIteration:
   1841             if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 1 fields in line 4, saw 2
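
The other very common case is a tab-separated file; a minimal sketch, assuming a hypothetical .tsv version of the same data:

df = pd.read_csv("data/smallpeople.tsv", sep='\t')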

We can specify whether there is a header, and on which line of the text file it is located.

In [17]:
df=pd.read_csv("data/smallpeople.csv",header=0)
df
Out[17]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
2    Stephen           5          4    male   19  12:35
3       Jane           3          5  female   20  14:50
4    Charles           4          .    male   21  14:55

If we state that there is no header (header=None), then the first line of the text file becomes the first data row of the table.

In [18]:
df=pd.read_csv("data/smallpeople.csv",header=None)
df
Out[18]:
           0           1           2       3    4      5
0       Name  Grade_Paul  Grade_John  Gender  Age   Date
1  Valentine           2           .    male   20  12:31
2      Sarah           4           4  female   22  13:20
3    Stephen           5           4    male   19  12:35
4       Jane           3           5  female   20  14:50
5    Charles           4           .    male   21  14:55

If we set the header to a later line, such as 3, the table only begins there; all earlier lines are skipped and that line becomes the header.

In [19]:
df=pd.read_csv("data/smallpeople.csv",header=3)
df
Out[19]:
   Stephen  5  4    male  19  12:35
0     Jane  3  5  female  20  14:50
1  Charles  4  .    male  21  14:55
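
If a file has no header line at all, we can also supply the column names ourselves via the names argument; a sketch, assuming a hypothetical header-less file with the same six columns:

df = pd.read_csv("data/smallpeople_noheader.csv", header=None,
                 names=['Name', 'Grade_Paul', 'Grade_John', 'Gender', 'Age', 'Date'])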

index_col (index column)

With index_col we can choose one of the columns as the row index. The index column is displayed slightly differently.

In [21]:
df=pd.read_csv("data/smallpeople.csv",index_col='Name')
df.head()
Out[21]:
           Grade_Paul Grade_John  Gender  Age   Date
Name
Valentine           2          .    male   20  12:31
Sarah               4          4  female   22  13:20
Stephen             5          4    male   19  12:35
Jane                3          5  female   20  14:50
Charles             4          .    male   21  14:55

This column behaves similarly to the previous, numeric index.

In [22]:
df['Grade_Paul']
Out[22]:
Name
Valentine    2
Sarah        4
Stephen      5
Jane         3
Charles      4
Name: Grade_Paul, dtype: int64
In [23]:
df.loc['Valentine',:]
Out[23]:
Grade_Paul        2
Grade_John        .
Gender         male
Age              20
Date          12:31
Name: Valentine, dtype: object
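
The same effect can also be achieved after reading, with the DataFrame's set_index() method; a short sketch:

df = pd.read_csv("data/smallpeople.csv")
df = df.set_index('Name')        # use the Name column as the row index
df.loc['Valentine', 'Age']       # label-based access then works as above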

nrows

Reading only the first few rows of a very big data file.

In [24]:
df=pd.read_csv("data/smallpeople.csv",nrows=2)
df
Out[24]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
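
A related argument is skiprows, which skips a given number of lines (or specific line numbers) at the start of the file; a sketch reading two rows from the middle of the data:

# keep the header (line 0), skip the first two data lines, then read two rows
df = pd.read_csv("data/smallpeople.csv", skiprows=range(1, 3), nrows=2)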

na_values

If missing values are marked in the file, for example with a '.' in place of the actual data, we can tell the csv reader to convert them to NaN. This can be important when computing averages, etc.

In [25]:
df=pd.read_csv("data/smallpeople.csv",na_values=['.'])
df
Out[25]:
        Name  Grade_Paul  Grade_John  Gender  Age   Date
0  Valentine           2         NaN    male   20  12:31
1      Sarah           4         4.0  female   22  13:20
2    Stephen           5         4.0    male   19  12:35
3       Jane           3         5.0  female   20  14:50
4    Charles           4         NaN    male   21  14:55
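
With the '.' entries converted to NaN, numeric operations simply skip the missing values; a quick sketch:

df['Grade_John'].mean()      # NaN values are ignored when averaging
df['Grade_John'].isnull()    # True where the value was missing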

parse_dates (handling dates)

The default behaviour is to do nothing with columns that look like a date or a time; they remain plain strings (dtype object).

In [26]:
df=pd.read_csv("data/smallpeople.csv")
df
Out[26]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
2    Stephen           5          4    male   19  12:35
3       Jane           3          5  female   20  14:50
4    Charles           4          .    male   21  14:55
In [28]:
df['Date']
Out[28]:
0    12:31
1    13:20
2    12:35
3    14:50
4    14:55
Name: Date, dtype: object

But we can tell read_csv() to parse them as dates.

In [29]:
df=pd.read_csv("data/smallpeople.csv",parse_dates=['Date'])
df
Out[29]:
        Name  Grade_Paul Grade_John  Gender  Age                 Date
0  Valentine           2          .    male   20  2019-02-11 12:31:00
1      Sarah           4          4  female   22  2019-02-11 13:20:00
2    Stephen           5          4    male   19  2019-02-11 12:35:00
3       Jane           3          5  female   20  2019-02-11 14:50:00
4    Charles           4          .    male   21  2019-02-11 14:55:00
In [30]:
df['Date']
Out[30]:
0   2019-02-11 12:31:00
1   2019-02-11 13:20:00
2   2019-02-11 12:35:00
3   2019-02-11 14:50:00
4   2019-02-11 14:55:00
Name: Date, dtype: datetime64[ns]
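
For non-standard date formats, the help text below also suggests parsing after reading with pd.to_datetime(); a small sketch (the format string is just an example matching these HH:MM values):

df = pd.read_csv("data/smallpeople.csv")
df['Date'] = pd.to_datetime(df['Date'], format='%H:%M')   # parse the strings explicitly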

compression (reading compressed files)

Compressed data takes up less space, and pandas can read it directly; by default, the compression type is inferred from the file extension.

In [31]:
df=pd.read_csv("data/smallpeople.csv.gz")
df
Out[31]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
2    Stephen           5          4    male   19  12:35
3       Jane           3          5  female   20  14:50
4    Charles           4          .    male   21  14:55

We can read big files in chunks:

In [32]:
for ch in pd.read_csv("data/smallpeople.csv.gz",iterator=True,chunksize=2):
    print(ch)
    # here we do something with it
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
      Name  Grade_Paul  Grade_John  Gender  Age   Date
2  Stephen           5           4    male   19  12:35
3     Jane           3           5  female   20  14:50
      Name  Grade_Paul Grade_John Gender  Age   Date
4  Charles           4          .   male   21  14:55
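
In practice we usually aggregate something chunk by chunk instead of printing; a minimal sketch that sums the ages without ever holding the whole file in memory:

total_age = 0
for ch in pd.read_csv("data/smallpeople.csv.gz", chunksize=2):
    total_age += ch['Age'].sum()    # process each chunk separately
print(total_age)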

We can read big files faster if we only need a handful of columns:

In [34]:
pd.read_csv("data/smallpeople.csv.gz",usecols=['Name','Gender'])
Out[34]:
        Name  Gender
0  Valentine    male
1      Sarah  female
2    Stephen    male
3       Jane  female
4    Charles    male
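
usecols can be combined with the dtype argument to reduce parsing time and memory further; a sketch (the chosen dtype is only an example):

pd.read_csv("data/smallpeople.csv.gz",
            usecols=['Name', 'Age'],
            dtype={'Age': 'int8'})    # store the ages in a small integer type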

Further settings:

In [34]:
help(pd.read_csv)
Help on function read_csv in module pandas.io.parsers:

read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None)
    Read CSV (comma-separated) file into DataFrame
    
    Also supports optionally iterating or breaking of the file
    into chunks.
    
    Additional help can be found in the `online docs for IO Tools
    <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
    
    Parameters
    ----------
    filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
        The string could be a URL. Valid URL schemes include http, ftp, s3, and
        file. For file URLs, a host is expected. For instance, a local file could
        be file ://localhost/path/to/table.csv
    sep : str, default ','
        Delimiter to use. If sep is None, the C engine cannot automatically detect
        the separator, but the Python parsing engine can, meaning the latter will
        be used automatically. In addition, separators longer than 1 character and
        different from ``'\s+'`` will be interpreted as regular expressions and
        will also force the use of the Python parsing engine. Note that regex
        delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``
    delimiter : str, default ``None``
        Alternative argument name for sep.
    delim_whitespace : boolean, default False
        Specifies whether or not whitespace (e.g. ``' '`` or ``'    '``) will be
        used as the sep. Equivalent to setting ``sep='\s+'``. If this option
        is set to True, nothing should be passed in for the ``delimiter``
        parameter.
    
        .. versionadded:: 0.18.1 support for the Python parser.
    
    header : int or list of ints, default 'infer'
        Row number(s) to use as the column names, and the start of the data.
        Default behavior is as if set to 0 if no ``names`` passed, otherwise
        ``None``. Explicitly pass ``header=0`` to be able to replace existing
        names. The header can be a list of integers that specify row locations for
        a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not
        specified will be skipped (e.g. 2 in this example is skipped). Note that
        this parameter ignores commented lines and empty lines if
        ``skip_blank_lines=True``, so header=0 denotes the first line of data
        rather than the first line of the file.
    names : array-like, default None
        List of column names to use. If file contains no header row, then you
        should explicitly pass header=None. Duplicates in this list are not
        allowed unless mangle_dupe_cols=True, which is the default.
    index_col : int or sequence or False, default None
        Column to use as the row labels of the DataFrame. If a sequence is given, a
        MultiIndex is used. If you have a malformed file with delimiters at the end
        of each line, you might consider index_col=False to force pandas to _not_
        use the first column as the index (row names)
    usecols : array-like or callable, default None
        Return a subset of the columns. If array-like, all elements must either
        be positional (i.e. integer indices into the document columns) or strings
        that correspond to column names provided either by the user in `names` or
        inferred from the document header row(s). For example, a valid array-like
        `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
    
        If callable, the callable function will be evaluated against the column
        names, returning names where the callable function evaluates to True. An
        example of a valid callable argument would be ``lambda x: x.upper() in
        ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
        parsing time and lower memory usage.
    as_recarray : boolean, default False
        DEPRECATED: this argument will be removed in a future version. Please call
        `pd.read_csv(...).to_records()` instead.
    
        Return a NumPy recarray instead of a DataFrame after parsing the data.
        If set to True, this option takes precedence over the `squeeze` parameter.
        In addition, as row indices are not available in such a format, the
        `index_col` parameter will be ignored.
    squeeze : boolean, default False
        If the parsed data only contains one column then return a Series
    prefix : str, default None
        Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
    mangle_dupe_cols : boolean, default True
        Duplicate columns will be specified as 'X.0'...'X.N', rather than
        'X'...'X'. Passing in False will cause data to be overwritten if there
        are duplicate names in the columns.
    dtype : Type name or dict of column -> type, default None
        Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
        Use `str` or `object` to preserve and not interpret dtype.
        If converters are specified, they will be applied INSTEAD
        of dtype conversion.
    engine : {'c', 'python'}, optional
        Parser engine to use. The C engine is faster while the python engine is
        currently more feature-complete.
    converters : dict, default None
        Dict of functions for converting values in certain columns. Keys can either
        be integers or column labels
    true_values : list, default None
        Values to consider as True
    false_values : list, default None
        Values to consider as False
    skipinitialspace : boolean, default False
        Skip spaces after delimiter.
    skiprows : list-like or integer or callable, default None
        Line numbers to skip (0-indexed) or number of lines to skip (int)
        at the start of the file.
    
        If callable, the callable function will be evaluated against the row
        indices, returning True if the row should be skipped and False otherwise.
        An example of a valid callable argument would be ``lambda x: x in [0, 2]``.
    skipfooter : int, default 0
        Number of lines at bottom of file to skip (Unsupported with engine='c')
    skip_footer : int, default 0
        DEPRECATED: use the `skipfooter` parameter instead, as they are identical
    nrows : int, default None
        Number of rows of file to read. Useful for reading pieces of large files
    na_values : scalar, str, list-like, or dict, default None
        Additional strings to recognize as NA/NaN. If dict passed, specific
        per-column NA values.  By default the following values are interpreted as
        NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
        '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'nan'`.
    keep_default_na : bool, default True
        If na_values are specified and keep_default_na is False the default NaN
        values are overridden, otherwise they're appended to.
    na_filter : boolean, default True
        Detect missing value markers (empty strings and the value of na_values). In
        data without any NAs, passing na_filter=False can improve the performance
        of reading a large file
    verbose : boolean, default False
        Indicate number of NA values placed in non-numeric columns
    skip_blank_lines : boolean, default True
        If True, skip over blank lines rather than interpreting as NaN values
    parse_dates : boolean or list of ints or names or list of lists or dict, default False
    
        * boolean. If True -> try parsing the index.
        * list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
          each as a separate date column.
        * list of lists. e.g.  If [[1, 3]] -> combine columns 1 and 3 and parse as
          a single date column.
        * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result
          'foo'
    
        If a column or index contains an unparseable date, the entire column or
        index will be returned unaltered as an object data type. For non-standard
        datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
    
        Note: A fast-path exists for iso8601-formatted dates.
    infer_datetime_format : boolean, default False
        If True and parse_dates is enabled, pandas will attempt to infer the format
        of the datetime strings in the columns, and if it can be inferred, switch
        to a faster method of parsing them. In some cases this can increase the
        parsing speed by 5-10x.
    keep_date_col : boolean, default False
        If True and parse_dates specifies combining multiple columns then
        keep the original columns.
    date_parser : function, default None
        Function to use for converting a sequence of string columns to an array of
        datetime instances. The default uses ``dateutil.parser.parser`` to do the
        conversion. Pandas will try to call date_parser in three different ways,
        advancing to the next if an exception occurs: 1) Pass one or more arrays
        (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
        string values from the columns defined by parse_dates into a single array
        and pass that; and 3) call date_parser once for each row using one or more
        strings (corresponding to the columns defined by parse_dates) as arguments.
    dayfirst : boolean, default False
        DD/MM format dates, international and European format
    iterator : boolean, default False
        Return TextFileReader object for iteration or getting chunks with
        ``get_chunk()``.
    chunksize : int, default None
        Return TextFileReader object for iteration.
        See the `IO Tools docs
        <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
        for more information on ``iterator`` and ``chunksize``.
    compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
        For on-the-fly decompression of on-disk data. If 'infer', then use gzip,
        bz2, zip or xz if filepath_or_buffer is a string ending in '.gz', '.bz2',
        '.zip', or 'xz', respectively, and no decompression otherwise. If using
        'zip', the ZIP file must contain only one data file to be read in.
        Set to None for no decompression.
    
        .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
    
    thousands : str, default None
        Thousands separator
    decimal : str, default '.'
        Character to recognize as decimal point (e.g. use ',' for European data).
    float_precision : string, default None
        Specifies which converter the C engine should use for floating-point
        values. The options are `None` for the ordinary converter,
        `high` for the high-precision converter, and `round_trip` for the
        round-trip converter.
    lineterminator : str (length 1), default None
        Character to break file into lines. Only valid with C parser.
    quotechar : str (length 1), optional
        The character used to denote the start and end of a quoted item. Quoted
        items can include the delimiter and it will be ignored.
    quoting : int or csv.QUOTE_* instance, default 0
        Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
        QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
    doublequote : boolean, default ``True``
       When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
       whether or not to interpret two consecutive quotechar elements INSIDE a
       field as a single ``quotechar`` element.
    escapechar : str (length 1), default None
        One-character string used to escape delimiter when quoting is QUOTE_NONE.
    comment : str, default None
        Indicates remainder of line should not be parsed. If found at the beginning
        of a line, the line will be ignored altogether. This parameter must be a
        single character. Like empty lines (as long as ``skip_blank_lines=True``),
        fully commented lines are ignored by the parameter `header` but not by
        `skiprows`. For example, if comment='#', parsing '#empty\na,b,c\n1,2,3'
        with `header=0` will result in 'a,b,c' being
        treated as the header.
    encoding : str, default None
        Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python
        standard encodings
        <https://docs.python.org/3/library/codecs.html#standard-encodings>`_
    dialect : str or csv.Dialect instance, default None
        If provided, this parameter will override values (default or not) for the
        following parameters: `delimiter`, `doublequote`, `escapechar`,
        `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
        override values, a ParserWarning will be issued. See csv.Dialect
        documentation for more details.
    tupleize_cols : boolean, default False
        Leave a list of tuples on columns as is (default is to convert to
        a Multi Index on the columns)
    error_bad_lines : boolean, default True
        Lines with too many fields (e.g. a csv line with too many commas) will by
        default cause an exception to be raised, and no DataFrame will be returned.
        If False, then these "bad lines" will dropped from the DataFrame that is
        returned.
    warn_bad_lines : boolean, default True
        If error_bad_lines is False, and warn_bad_lines is True, a warning for each
        "bad line" will be output.
    low_memory : boolean, default True
        Internally process the file in chunks, resulting in lower memory use
        while parsing, but possibly mixed type inference.  To ensure no mixed
        types either set False, or specify the type with the `dtype` parameter.
        Note that the entire file is read into a single DataFrame regardless,
        use the `chunksize` or `iterator` parameter to return the data in chunks.
        (Only valid with C parser)
    buffer_lines : int, default None
        DEPRECATED: this argument will be removed in a future version because its
        value is not respected by the parser
    compact_ints : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If compact_ints is True, then for any column that is of integer dtype,
        the parser will attempt to cast it as the smallest integer dtype possible,
        either signed or unsigned depending on the specification from the
        `use_unsigned` parameter.
    use_unsigned : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If integer columns are being compacted (i.e. `compact_ints=True`), specify
        whether the column should be compacted to the smallest signed or unsigned
        integer dtype.
    memory_map : boolean, default False
        If a filepath is provided for `filepath_or_buffer`, map the file object
        directly onto memory and access the data directly from there. Using this
        option can improve performance because there is no longer any I/O overhead.
    
    Returns
    -------
    result : DataFrame or TextParser

Writing to csv

By default, the index is also written out.

In [35]:
df.to_csv('tmp.tsv')
In [36]:
%cat tmp.tsv
,Name,Grade_Paul,Grade_John,Gender,Age,Date
0,Valentine,2,.,male,20,12:31
1,Sarah,4,4,female,22,13:20
2,Stephen,5,4,male,19,12:35
3,Jane,3,5,female,20,14:50
4,Charles,4,.,male,21,14:55

If we don't want that:

In [37]:
df.to_csv('tmp.csv',index=False)
In [38]:
%cat tmp.csv
Name,Grade_Paul,Grade_John,Gender,Age,Date
Valentine,2,.,male,20,12:31
Sarah,4,4,female,22,13:20
Stephen,5,4,male,19,12:35
Jane,3,5,female,20,14:50
Charles,4,.,male,21,14:55

We can set the column separator again by using the sep argument.

In [39]:
df.to_csv('tmp.tsv',sep='\t')
In [40]:
%cat tmp.tsv
	Name	Grade_Paul	Grade_John	Gender	Age	Date
0	Valentine	2	.	male	20	12:31
1	Sarah	4	4	female	22	13:20
2	Stephen	5	4	male	19	12:35
3	Jane	3	5	female	20	14:50
4	Charles	4	.	male	21	14:55

We can also write compressed files.

In [41]:
df.to_csv('tmp.csv.gz')
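
Whether the .gz extension alone triggers compression depends on the pandas version; to be safe, we can pass the compression argument explicitly:

df.to_csv('tmp.csv.gz', compression='gzip')   # compress regardless of how the extension is interpreted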

We can set the format of floating-point numbers (this table has no float columns, so the output is unchanged).

In [42]:
df.to_csv('tmp.csv',float_format='%.2f')
In [43]:
%cat tmp.csv
,Name,Grade_Paul,Grade_John,Gender,Age,Date
0,Valentine,2,.,male,20,12:31
1,Sarah,4,4,female,22,13:20
2,Stephen,5,4,male,19,12:35
3,Jane,3,5,female,20,14:50
4,Charles,4,.,male,21,14:55

Other file I/O functions

We can read Excel.

In [44]:
df=pd.read_excel('data/smallpeople.xlsx')
df
Out[44]:
        Name  Grade_Paul Grade_John  Gender  Age   Date
0  Valentine           2          .    male   20  12:31
1      Sarah           4          4  female   22  13:20
2    Stephen           5          4    male   19  12:35
3       Jane           3          5  female   20  14:50
4    Charles           4          .    male   21  14:55

And write to Excel.

In [45]:
df.to_excel('tmp.xlsx')
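
As with to_csv(), the index can be left out here as well; a small sketch (writing Excel files requires an engine such as openpyxl or xlsxwriter to be installed):

df.to_excel('tmp.xlsx', index=False)   # omit the index column from the spreadsheet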

Reading a dictionary

It is very common for web APIs to return their results as dictionary-like text in the so-called JSON format. We can turn these into Python dictionaries using the json library.

In [49]:
import json

Each line of the data/json_example file is such a dictionary, obtained from the Google Geocoding API. We load these lines into a list of dictionaries with the following command:

In [50]:
d=[json.loads(s) for s in open("data/json_example").readlines()]
d[0:2]
Out[50]:
[{'id': '3040051',
  'query': 'les+Escaldes+AD',
  'results': [{'address_components': [{'long_name': 'Les Escaldes',
      'short_name': 'Les Escaldes',
      'types': ['locality', 'political']},
     {'long_name': 'Escaldes-Engordany',
      'short_name': 'Escaldes-Engordany',
      'types': ['administrative_area_level_1', 'political']},
     {'long_name': 'Andorra',
      'short_name': 'AD',
      'types': ['country', 'political']},
     {'long_name': 'AD700', 'short_name': 'AD700', 'types': ['postal_code']}],
    'formatted_address': 'AD700 Les Escaldes, Andorra',
    'geometry': {'bounds': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}},
     'location': {'lat': 42.5100804, 'lng': 1.5387862},
     'location_type': 'APPROXIMATE',
     'viewport': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}}},
    'place_id': 'ChIJaxpK9OKKpRIRtp4e8lTF3v0',
    'types': ['locality', 'political']}],
  'status': 'OK'},
 {'id': '3040051',
  'query': 'les+Escaldes+AD',
  'results': [{'address_components': [{'long_name': 'Les Escaldes',
      'short_name': 'Les Escaldes',
      'types': ['locality', 'political']},
     {'long_name': 'Escaldes-Engordany',
      'short_name': 'Escaldes-Engordany',
      'types': ['administrative_area_level_1', 'political']},
     {'long_name': 'Andorra',
      'short_name': 'AD',
      'types': ['country', 'political']},
     {'long_name': 'AD700', 'short_name': 'AD700', 'types': ['postal_code']}],
    'formatted_address': 'AD700 Les Escaldes, Andorra',
    'geometry': {'bounds': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}},
     'location': {'lat': 42.5100804, 'lng': 1.5387862},
     'location_type': 'APPROXIMATE',
     'viewport': {'northeast': {'lat': 42.5168669, 'lng': 1.5532685},
      'southwest': {'lat': 42.5067774, 'lng': 1.5285531}}},
    'place_id': 'ChIJaxpK9OKKpRIRtp4e8lTF3v0',
    'types': ['locality', 'political']}],
  'status': 'OK'}]

We see that the list elements contain the same keys, so it makes sense to build a table from this list.

In [51]:
pd.DataFrame.from_dict(d)
Out[51]:
id query results status
0 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
1 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
2 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
3 3041563 Andorra+la+Vella+AD [{'types': ['locality', 'political'], 'address... OK
4 290594 Umm+al+Qaywayn+AE [{'types': ['administrative_area_level_1', 'po... OK
5 291074 Ras+al-Khaimah+AE [{'types': ['locality', 'political'], 'address... OK
6 3040051 les+Escaldes+AD [{'types': ['locality', 'political'], 'address... OK
7 3041563 Andorra+la+Vella+AD [{'types': ['locality', 'political'], 'address... OK
8 290594 Umm+al+Qaywayn+AE [{'types': ['administrative_area_level_1', 'po... OK
9 291074 Ras+al-Khaimah+AE [{'types': ['locality', 'political'], 'address... OK
10 291696 Khawr+Fakkan+AE [{'types': ['locality', 'political'], 'address... OK
11 292223 Dubai+AE [{'types': ['locality', 'political'], 'address... OK
12 292231 Dibba+Al-Fujairah+AE [{'types': ['locality', 'political'], 'address... OK
13 292239 Dibba+Al-Hisn+AE [{'types': ['locality', 'political'], 'address... OK
14 292672 Sharjah+AE [{'types': ['locality', 'political'], 'address... OK
15 292688 Ar+Ruways+AE [{'types': ['locality', 'political'], 'address... OK
16 292878 Al+Fujayrah+AE [{'types': ['administrative_area_level_1', 'po... OK
17 292913 Al+Ain+AE [{'types': ['locality', 'political'], 'address... OK
18 292932 Ajman+AE [{'types': ['locality', 'political'], 'address... OK
19 292953 Adh+Dhayd+AE [{'types': ['locality', 'political'], 'address... OK
20 292968 Abu+Dhabi+AE [{'types': ['locality', 'political'], 'address... OK
21 1120985 Zaranj+AF [{'types': ['locality', 'political'], 'address... OK
22 1123004 Taloqan+AF [{'types': ['locality', 'political'], 'address... OK
23 1125155 Shindand+AF [{'types': ['airport', 'establishment', 'point... OK
24 1125444 Shibirghan+AF [{'types': ['locality', 'political'], 'address... OK
25 1125896 Shahrak+AF [{'types': ['administrative_area_level_2', 'po... OK
26 1127110 Sar-e+Pul+AF [{'types': ['administrative_area_level_1', 'po... OK
27 1127628 Sang-e+Charak+AF [{'types': ['administrative_area_level_2', 'po... OK
28 1127768 Aibak+AF [{'formatted_address': 'Aybak, Afghanistan', '... OK
29 1128265 Rustaq+AF [{'types': ['locality', 'political'], 'address... OK
30 1129516 Qarqin+AF [{'types': ['locality', 'political'], 'address... OK
31 1129648 Qarawul+AF [{'formatted_address': 'Hazart Imam, Afghanist... OK
32 1130490 Pul-e+Khumri+AF [{'types': ['locality', 'political'], 'address... OK
33 1131316 Paghman+AF [{'formatted_address': 'Paghman, Afghanistan',... OK
34 1132495 Nahrin+AF [{'formatted_address': 'Nahrain, Afghanistan',... OK
35 1133453 Maymana+AF [{'types': ['locality', 'political'], 'address... OK
36 1133574 Mehtar+Lam+AF [{'types': ['locality', 'political'], 'address... OK
37 1133616 Mazar-e+Sharif+AF [{'types': ['locality', 'political'], 'address... OK
38 1134720 Lashkar+Gah+AF [{'types': ['locality', 'political'], 'address... OK
39 1135158 Kushk+AF [{'formatted_address': 'Kūšk, Afghanistan', 'p... OK
40 1135689 Kunduz+AF [{'types': ['locality', 'political'], 'address... OK
41 1136469 Khost+AF [{'types': ['locality', 'political'], 'address... OK
42 1136575 Khulm+AF [{'types': ['locality', 'political'], 'address... OK
43 1136863 Khash+AF [{'formatted_address': 'Khash, Afghanistan', '... OK
44 1137168 Khanabad+AF [{'types': ['locality', 'political'], 'address... OK
45 1137807 Karukh+AF [{'formatted_address': 'Karokh, Afghanistan', ... OK
46 1138336 Kandahar+AF [{'types': ['locality', 'political'], 'address... OK
47 1138958 Kabul+AF [{'types': ['locality', 'political'], 'address... OK
48 1139715 Jalalabad+AF [{'types': ['locality', 'political'], 'address... OK
49 1139807 Jabal+os+Saraj+AF [{'types': ['locality', 'political'], 'address... OK