{ "cells": [ { "cell_type": "raw", "id": "7d9ef05c", "metadata": {}, "source": [ "Run in Google Colab" ] }, { "cell_type": "markdown", "id": "4a2b0d98", "metadata": {}, "source": [ "# Sparse Inputs" ] }, { "cell_type": "markdown", "id": "a8eea702", "metadata": {}, "source": [ "SciKeras supports sparse inputs (`X`/features).\n", "You don't have to do anything special for this to work; you can just pass a sparse matrix to `fit()`.\n", "\n", "In this notebook, we'll demonstrate how this works and compare the memory consumption of sparse inputs to dense inputs." ] }, { "cell_type": "markdown", "id": "f3685580", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": 1, "id": "f05b12b6", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:55.099352Z", "iopub.status.busy": "2022-07-27T13:56:55.098887Z", "iopub.status.idle": "2022-07-27T13:56:57.817811Z", "shell.execute_reply": "2022-07-27T13:56:57.817107Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Collecting memory_profiler\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " Downloading memory_profiler-0.60.0.tar.gz (38 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " Preparing metadata (setup.py) ... \u001b[?25l-" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\b \bdone\r\n", "\u001b[?25hRequirement already satisfied: psutil in /home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages (from memory_profiler) (5.9.1)\r\n", "Building wheels for collected packages: memory_profiler\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " Building wheel for memory_profiler (setup.py) ... 
\u001b[?25l-" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\b \b\\" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\b \bdone\r\n", "\u001b[?25h Created wheel for memory_profiler: filename=memory_profiler-0.60.0-py3-none-any.whl size=31267 sha256=0765a2d00950163b8bdc91ec344d3120bcc9a17e9c0d4bf0e947ff2740e15f29\r\n", " Stored in directory: /home/runner/.cache/pip/wheels/01/ca/8b/b518dd2aef69635ad6fcab87069c9c52f355a2e9c5d4c02da9\r\n", "Successfully built memory_profiler\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Installing collected packages: memory_profiler\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Successfully installed memory_profiler-0.60.0\r\n" ] } ], "source": [ "!pip install memory_profiler\n", "%load_ext memory_profiler" ] }, { "cell_type": "code", "execution_count": 2, "id": "c915f896", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:57.823872Z", "iopub.status.busy": "2022-07-27T13:56:57.822674Z", "iopub.status.idle": "2022-07-27T13:56:59.500437Z", "shell.execute_reply": "2022-07-27T13:56:59.499785Z" } }, "outputs": [], "source": [ "import warnings\n", "import os\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n", "from tensorflow import get_logger\n", "get_logger().setLevel('ERROR')\n", "warnings.filterwarnings(\"ignore\", message=\"Setting the random state for TF\")" ] }, { "cell_type": "code", "execution_count": 3, "id": "f3ee23f3", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:59.505858Z", "iopub.status.busy": "2022-07-27T13:56:59.504549Z", "iopub.status.idle": "2022-07-27T13:56:59.524062Z", "shell.execute_reply": "2022-07-27T13:56:59.523441Z" } }, "outputs": [], "source": [ "try:\n", " import scikeras\n", "except ImportError:\n", " !python -m pip install scikeras" ] }, { "cell_type": "code", "execution_count": 4, "id": "faf130e5", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:59.530291Z", "iopub.status.busy": 
"2022-07-27T13:56:59.528594Z", "iopub.status.idle": "2022-07-27T13:56:59.843487Z", "shell.execute_reply": "2022-07-27T13:56:59.842834Z" } }, "outputs": [], "source": [ "import scipy\n", "import numpy as np\n", "from scikeras.wrappers import KerasRegressor\n", "from sklearn.preprocessing import OneHotEncoder\n", "from sklearn.pipeline import Pipeline\n", "from tensorflow import keras" ] }, { "cell_type": "markdown", "id": "a16e80a2", "metadata": {}, "source": [ "## Data\n", "\n", "The dataset we'll be using is designed to demonstrate a worst-case/best-case scenario for dense and sparse input features respectively.\n", "It consists of a single categorical feature with as many categories as there are rows.\n", "This means the one-hot encoded representation will require as many columns as it does rows, making it very inefficient to store as a dense matrix but very efficient to store as a sparse matrix." ] }, { "cell_type": "code", "execution_count": 5, "id": "218766b5", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:59.849146Z", "iopub.status.busy": "2022-07-27T13:56:59.847546Z", "iopub.status.idle": "2022-07-27T13:56:59.853325Z", "shell.execute_reply": "2022-07-27T13:56:59.852838Z" } }, "outputs": [], "source": [ "N_SAMPLES = 20_000 # hand tuned to be ~4GB peak\n", "\n", "X = np.arange(0, N_SAMPLES).reshape(-1, 1)\n", "y = np.random.uniform(0, 1, size=(X.shape[0],))" ] }, { "cell_type": "markdown", "id": "b68360b6", "metadata": {}, "source": [ "## Model\n", "\n", "The model here is nothing special, just a basic multilayer perceptron with one hidden layer."
] }, { "cell_type": "code", "execution_count": 6, "id": "f66985a1", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:59.857463Z", "iopub.status.busy": "2022-07-27T13:56:59.856443Z", "iopub.status.idle": "2022-07-27T13:56:59.861484Z", "shell.execute_reply": "2022-07-27T13:56:59.861017Z" } }, "outputs": [], "source": [ "def get_clf(meta) -> keras.Model:\n", "    n_features_in_ = meta[\"n_features_in_\"]\n", "    model = keras.models.Sequential()\n", "    model.add(keras.layers.Input(shape=(n_features_in_,)))\n", "    # a single hidden layer\n", "    model.add(keras.layers.Dense(100, activation=\"relu\"))\n", "    model.add(keras.layers.Dense(1))\n", "    return model" ] }, { "cell_type": "markdown", "id": "2452a493", "metadata": {}, "source": [ "## Pipelines\n", "\n", "Here is where it gets interesting.\n", "We make two Scikit-Learn pipelines that use `OneHotEncoder`: one that uses `sparse=False` to force a dense matrix as the output and another that uses `sparse=True` (the default).\n", "(Note: scikit-learn 1.2 renamed this parameter to `sparse_output`; the `sparse` name used here works in the older version this notebook was run with.)" ] }, { "cell_type": "code", "execution_count": 7, "id": "2c4b52e2", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:59.865535Z", "iopub.status.busy": "2022-07-27T13:56:59.864511Z", "iopub.status.idle": "2022-07-27T13:56:59.869600Z", "shell.execute_reply": "2022-07-27T13:56:59.869135Z" } }, "outputs": [], "source": [ "dense_pipeline = Pipeline(\n", "    [\n", "        (\"encoder\", OneHotEncoder(sparse=False)),\n", "        (\"model\", KerasRegressor(get_clf, loss=\"mse\", epochs=5, verbose=False))\n", "    ]\n", ")\n", "\n", "sparse_pipeline = Pipeline(\n", "    [\n", "        (\"encoder\", OneHotEncoder(sparse=True)),\n", "        (\"model\", KerasRegressor(get_clf, loss=\"mse\", epochs=5, verbose=False))\n", "    ]\n", ")" ] }, { "cell_type": "markdown", "id": "c0742dab", "metadata": {}, "source": [ "## Benchmark\n", "\n", "Our benchmark is simply to train each of these pipelines and measure peak memory consumption."
] }, { "cell_type": "code", "execution_count": 8, "id": "d949fc0b", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:56:59.873619Z", "iopub.status.busy": "2022-07-27T13:56:59.872604Z", "iopub.status.idle": "2022-07-27T13:58:16.077906Z", "shell.execute_reply": "2022-07-27T13:58:16.077061Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "peak memory: 3469.38 MiB, increment: 3089.32 MiB\n" ] } ], "source": [ "%memit dense_pipeline.fit(X, y)" ] }, { "cell_type": "code", "execution_count": 9, "id": "97320f3b", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:58:16.084788Z", "iopub.status.busy": "2022-07-27T13:58:16.083201Z", "iopub.status.idle": "2022-07-27T13:58:36.918778Z", "shell.execute_reply": "2022-07-27T13:58:36.917788Z" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_1/dense_2/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_1/dense_2/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_1/dense_2/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.\n", "  warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "peak memory: 659.04 MiB, increment: 34.25 MiB\n" ] } ], "source": [ "%memit sparse_pipeline.fit(X, y)" ] }, { "cell_type": "markdown", "id": "8fcf20fe", "metadata": {}, "source": [ "You should see a drastically larger memory **increment** for the dense pipeline: roughly 90x larger here (~3089 MiB vs. ~34 MiB)."
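] }, { "cell_type": "markdown", "id": "c3a91f20", "metadata": {}, "source": [ "The gap follows directly from storage costs: a dense one-hot matrix stores `N_SAMPLES * N_SAMPLES` values, while the sparse representation stores only the `N_SAMPLES` nonzero entries plus index overhead. As a rough sketch of the arithmetic (using a smaller, illustrative `n` rather than the benchmark's `N_SAMPLES`), you can check this with `scipy.sparse` directly:\n", "\n", "```python\n", "import numpy as np\n", "from scipy import sparse\n", "\n", "n = 2_000  # illustrative size, much smaller than the benchmark\n", "dense = np.eye(n)  # dense one-hot: n * n float64 values\n", "sp = sparse.eye(n).tocsr()  # sparse CSR one-hot: n nonzeros\n", "\n", "dense_bytes = dense.nbytes  # n * n * 8 bytes\n", "sparse_bytes = sp.data.nbytes + sp.indices.nbytes + sp.indptr.nbytes\n", "print(dense_bytes / sparse_bytes)  # dense is hundreds of times larger at this size\n", "```\n", "\n", "The dense array grows quadratically with the number of categories while the sparse one grows linearly, which is why the increments above differ by orders of magnitude."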
] }, { "cell_type": "markdown", "id": "37328415", "metadata": {}, "source": [ "### Runtime\n", "\n", "Using sparse inputs can have a drastic impact on memory usage, but its effect on overall runtime is workload-dependent: it sometimes hurts, though in this benchmark the sparse pipeline is substantially faster." ] }, { "cell_type": "code", "execution_count": 10, "id": "561b3d47", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T13:58:36.923687Z", "iopub.status.busy": "2022-07-27T13:58:36.923290Z", "iopub.status.idle": "2022-07-27T14:08:10.668211Z", "shell.execute_reply": "2022-07-27T14:08:10.666605Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1min 12s ± 7.28 s per loop (mean ± std. dev. of 7 runs, 1 loop each)\n" ] } ], "source": [ "%timeit dense_pipeline.fit(X, y)" ] }, { "cell_type": "code", "execution_count": 11, "id": "d70fc955", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T14:08:10.691546Z", "iopub.status.busy": "2022-07-27T14:08:10.690797Z", "iopub.status.idle": "2022-07-27T14:09:59.844889Z", "shell.execute_reply": "2022-07-27T14:09:59.844218Z" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_10/dense_20/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_10/dense_20/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_10/dense_20/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. 
This may consume a large amount of memory.\n", " warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_11/dense_22/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_11/dense_22/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_11/dense_22/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.\n", " warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_12/dense_24/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_12/dense_24/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_12/dense_24/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. 
This may consume a large amount of memory.\n", " warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_13/dense_26/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_13/dense_26/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_13/dense_26/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.\n", " warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_14/dense_28/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_14/dense_28/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_14/dense_28/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. 
This may consume a large amount of memory.\n", " warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_15/dense_30/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_15/dense_30/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_15/dense_30/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.\n", " warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_16/dense_32/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_16/dense_32/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_16/dense_32/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. 
This may consume a large amount of memory.\n", "  warnings.warn(\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_17/dense_34/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_17/dense_34/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_17/dense_34/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.\n", "  warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "13.7 s ± 754 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n" ] } ], "source": [ "%timeit sparse_pipeline.fit(X, y)" ] }, { "cell_type": "markdown", "id": "dafe2955", "metadata": {}, "source": [ "## TensorFlow Datasets\n", "\n", "TensorFlow provides a whole suite of functionality around the [Dataset].\n", "Datasets are lazily evaluated, can be sparse, and minimize the transformations required to feed data into the model.\n", "They are _a lot_ more performant and efficient at scale than NumPy data structures, even sparse ones.\n", "\n", "SciKeras does not (and cannot) support Datasets directly because Scikit-Learn itself does not support them and SciKeras' outward-facing API is Scikit-Learn's API.\n", "You may want to explore breaking out of SciKeras and using TensorFlow/Keras directly to see whether Datasets make a large difference for your use case.\n", "\n", "[Dataset]: https://www.tensorflow.org/api_docs/python/tf/data/Dataset" ] }, { "cell_type": "markdown", "id": "2e1e4ada", "metadata": {}, "source": [ "## Bonus: dtypes\n", "\n", "You might be able to save even more memory by changing the output dtype of `OneHotEncoder`."
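] }, { "cell_type": "markdown", "id": "e5b80c17", "metadata": {}, "source": [ "To see why dtype matters: each stored nonzero in a CSR matrix carries its value alongside fixed-size index entries, so shrinking the value dtype from 8-byte `float64` to 1-byte `uint8` shrinks the data buffer by 8x while leaving the index arrays unchanged. A quick sketch with an illustrative size `n`:\n", "\n", "```python\n", "import numpy as np\n", "from scipy import sparse\n", "\n", "n = 2_000  # illustrative size\n", "sp64 = sparse.eye(n, dtype=np.float64).tocsr()\n", "sp8 = sparse.eye(n, dtype=np.uint8).tocsr()\n", "\n", "print(sp64.data.nbytes)  # 8 bytes per stored value\n", "print(sp8.data.nbytes)  # 1 byte per stored value; index arrays are unchanged\n", "```\n", "\n", "Keep in mind that Keras will still cast the input to floats internally during training, so the savings apply to the encoded matrix itself, not to the model's working memory."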
] }, { "cell_type": "code", "execution_count": 12, "id": "557e74c0", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T14:09:59.852466Z", "iopub.status.busy": "2022-07-27T14:09:59.851952Z", "iopub.status.idle": "2022-07-27T14:09:59.856240Z", "shell.execute_reply": "2022-07-27T14:09:59.855580Z" } }, "outputs": [], "source": [ "sparse_pipeline_uint8 = Pipeline(\n", "    [\n", "        (\"encoder\", OneHotEncoder(sparse=True, dtype=np.uint8)),\n", "        (\"model\", KerasRegressor(get_clf, loss=\"mse\", epochs=5, verbose=False))\n", "    ]\n", ")" ] }, { "cell_type": "code", "execution_count": 13, "id": "9d97c29c", "metadata": { "execution": { "iopub.execute_input": "2022-07-27T14:09:59.860183Z", "iopub.status.busy": "2022-07-27T14:09:59.859646Z", "iopub.status.idle": "2022-07-27T14:10:16.048731Z", "shell.execute_reply": "2022-07-27T14:10:16.047630Z" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/runner/work/scikeras/scikeras/.venv/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor(\"gradient_tape/sequential_18/dense_36/embedding_lookup_sparse/Reshape_1:0\", shape=(None,), dtype=int32), values=Tensor(\"gradient_tape/sequential_18/dense_36/embedding_lookup_sparse/Reshape:0\", shape=(None, 100), dtype=float32), dense_shape=Tensor(\"gradient_tape/sequential_18/dense_36/embedding_lookup_sparse/Cast:0\", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.\n", "  warnings.warn(\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "peak memory: 1201.53 MiB, increment: 1.71 MiB\n" ] } ], "source": [ "%memit sparse_pipeline_uint8.fit(X, y)" ] } ], "metadata": { "jupytext": { "formats": "ipynb,md" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.13" } }, "nbformat": 4, "nbformat_minor": 5 }