I have a binary classification problem. I use the following keras model to do my classification.
```python
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout, concatenate
from tensorflow.keras.models import Model

input1 = Input(shape=(25, 6))
x1 = LSTM(200)(input1)
input2 = Input(shape=(24, 6))
x2 = LSTM(200)(input2)
input3 = Input(shape=(21, 6))
x3 = LSTM(200)(input3)
input4 = Input(shape=(20, 6))
x4 = LSTM(200)(input4)

x = concatenate([x1, x2, x3, x4])
x = Dropout(0.2)(x)
x = Dense(200)(x)
x = Dropout(0.2)(x)
output = Dense(1, activation='sigmoid')(x)

model = Model(inputs=[input1, input2, input3, input4], outputs=output)
```
However, the results I get are extremely poor. I thought the reason might be that I have too many features, and that I therefore need more layers after the concatenation.
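Concretely, what I had in mind was adding one or two extra Dense layers after the concatenation, something like the sketch below (the layer sizes and the relu activations are just my guesses, not something I know to be right; I shortened it to two inputs to keep it readable):

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout, concatenate
from tensorflow.keras.models import Model

# two of the four sequence inputs, same shapes as in my model above
input1 = Input(shape=(25, 6))
x1 = LSTM(200)(input1)
input2 = Input(shape=(24, 6))
x2 = LSTM(200)(input2)

# a deeper "head" after the concatenation (sizes/activations are guesses)
x = concatenate([x1, x2])
x = Dense(400, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(200, activation='relu')(x)
x = Dropout(0.2)(x)
output = Dense(1, activation='sigmoid')(x)

model = Model(inputs=[input1, input2], outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```

Would something along these lines help, or is the problem elsewhere?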
I was also wondering whether it would help to use a Flatten() layer after the concatenation.
Anyway, since I am new to deep learning, I am not sure how to improve this model.
I am happy to provide more details if needed.