How to feed LSTM with different input array sizes?





If I want to write an LSTM network and feed it input arrays of different sizes, how is that possible?

For example, I want to take voice or text messages in a different language and translate them. The first input might be "hello" but the second might be "how are you doing". How can I design an LSTM that can handle input arrays of different sizes?

I am using the Keras implementation of LSTM.

Tags: keras, lstm (asked by user145959)










2 Answers






Answer by Esmailian (score 2):

          The easiest way is to use Padding and Masking.



          There are three general ways to handle variable-length sequences:




          1. Padding and masking (which can be used for (3)),

          2. Batch size = 1, and

          3. Batch size > 1, with equi-length samples in each batch.


          Padding and masking



          In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then



          X = [

          [[1, 1.1],
          [0.9, 0.95]], # sequence 1 (2 timestamps)

          [[2, 2.2],
          [1.9, 1.95],
          [1.8, 1.85]], # sequence 2 (3 timestamps)

          ]


          will be converted to



          X2 = [

          [[1, 1.1],
          [0.9, 0.95],
          [-10, -10]], # padded sequence 1 (3 timestamps)

          [[2, 2.2],
          [1.9, 1.95],
          [1.8, 1.85]], # sequence 2 (3 timestamps)
          ]


This way, all sequences have the same length. Then, we use a Masking layer that skips those special timestamps as if they don't exist. A complete example is given at the end.



          For cases (2) and (3) you need to set the seq_len of LSTM to None, e.g.



          model.add(LSTM(units, input_shape=(None, dimension)))


This way, the LSTM accepts batches with different lengths, although samples inside each batch must have the same length. Then, you need to feed a custom batch generator to model.fit_generator (instead of model.fit).



          I have provided a complete example for simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length, or (b) select sequences with almost the same length, and pad the shorter ones the same as case (1), and use a Masking layer before LSTM layer to ignore the padded timestamps, e.g.



          model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
          model.add(LSTM(lstm_units))


where the first dimension of input_shape in Masking is again None to allow batches with different lengths.



          Here is the code for cases (1) and (2):



from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np


class MyBatchGenerator(Sequence):
    'Generates data for Keras'

    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y) / self.batch_size))

    def __getitem__(self, index):
        # map through the (possibly shuffled) index list
        return self.__data_generation(self.indexes[index])

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb


# Parameters
N = 1000
halfN = int(N / 2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9 (randint upper bound is exclusive)
seq_lens = np.random.randint(1, 10, halfN)
# dtype=object keeps the ragged (variable-length) sequences as an array of arrays
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)
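For completeness, here is a rough sketch of a batch generator for case (3), which groups samples of identical length into buckets (the "bucketing" mentioned in the comments below). This is an illustrative addition rather than part of the original answer; the class name LengthBucketGenerator and its grouping logic are assumptions:

from collections import defaultdict
import numpy as np
from keras.utils import Sequence


class LengthBucketGenerator(Sequence):
    'Hypothetical sketch: batches of equal-length sequences (case 3)'

    def __init__(self, X, y, batch_size=32):
        self.X, self.y, self.batch_size = X, y, batch_size
        buckets = defaultdict(list)
        for i, x in enumerate(X):
            buckets[x.shape[0]].append(i)  # group sample indices by sequence length
        # split every bucket into chunks of at most batch_size samples
        self.batches = [idx[k:k + batch_size]
                        for idx in buckets.values()
                        for k in range(0, len(idx), batch_size)]

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, b):
        idx = self.batches[b]
        Xb = np.stack([self.X[i] for i in idx])  # all same length, so stacking is safe
        yb = np.stack([self.y[i] for i in idx])
        return Xb, yb


# usage with the batch = 1 model above (input_shape=(None, dimension)):
# model.fit_generator(LengthBucketGenerator(X, y, batch_size=32), epochs=2)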


          Extra notes




1. Note that if we pad without masking, the padded values are treated as actual values and therefore become noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] looks the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is more reasonable to clean the data first, i.e. use a mask.
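To see concretely why masking matters (beyond the note above), one can check that a masked model produces the same prediction for a sequence whether or not padding timestamps are appended. This small check is an addition, not part of the original answer; it reuses X, Xpad, special_value, dimension and lstm_units from the code above, and an untrained model is enough because trained weights are irrelevant to the comparison:

# hypothetical sanity check: with Masking, padded timestamps must not change the output
check_model = Sequential()
check_model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
check_model.add(LSTM(lstm_units))
check_model.add(Dense(1, activation='sigmoid'))

x_raw = X[0][np.newaxis, :, :]        # shape (1, seq_len, dimension)
x_padded = Xpad[0][np.newaxis, :, :]  # shape (1, max_seq_len, dimension), padded with special_value

print(check_model.predict(x_raw))     # random (untrained) weights are fine for this comparison
print(check_model.predict(x_padded))  # expected to match the line above, since padded steps are skipped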



















Comments:

• user145959 (9 hours ago): Thank you very much Esmailian for your complete example. Just one question: what is the difference between using padding+masking and only using padding (like what the other answer suggested)? Will we see a considerable effect on the final result?

• Esmailian (7 hours ago): @user145959 My pleasure! I added a note at the end.

• Aditya (3 hours ago): Wow, a great answer! It's called bucketing, right?





















Answer by Shubham Panchal (score 1):

You can use LSTM layers with inputs of different sizes, but you need to preprocess them before they are fed to the LSTM.

Padding the sequences:

You need to pad the sequences of varying length to a fixed length. For this preprocessing, you have to determine the maximum length of the sequences in your dataset.

Sequences are mostly padded with the value 0. You can do this in Keras with:

y = keras.preprocessing.sequence.pad_sequences( x , maxlen=10 )

• If a sequence is shorter than the max length, zeros are appended until it has a length equal to the max length.

• If a sequence is longer than the max length, it is trimmed to the max length (a short example follows below).
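As a small illustration of that behaviour (an added example with assumed values, not from the answer itself), note that pad_sequences pads and truncates at the front ('pre') by default and casts to int32, so for real-valued feature sequences you would normally pass dtype and padding explicitly:

from keras.preprocessing.sequence import pad_sequences

x = [[1, 2, 3],
     [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]   # lengths 3 and 11

y = pad_sequences(x, maxlen=10)
print(y.shape)  # (2, 10): short sequence is zero-padded, long one is trimmed
print(y[0])     # [0 0 0 0 0 0 0 1 2 3]            -> padded at the front by default
print(y[1])     # [ 5  6  7  8  9 10 11 12 13 14]  -> truncated at the front by default

# for float sequences, keep the values and pad/trim at the end instead:
# y = pad_sequences(x, maxlen=10, dtype='float32', padding='post', truncating='post')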




















Comment:

• Aditya (3 hours ago): Padding everything to a fixed length is a waste of space.











