I am really an AI noob. But let's say I tried the 7B model for translation purposes with below-acceptable results. Can I train the model with a million translated sentences to improve the quality of the translation output?

Can you give an example of how you prompted the model? Your issue is probably related to that, but I would need an example to be sure. I've found the 7B Alpaca model [1] to work surprisingly well! Here's how you're supposed to prompt it:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {instruction}

### Response:

or

Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction: {instruction}

### Input: {input}

### Response:
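
For the translation use case, here is a minimal sketch in Python of how you might fill in the second template before handing the string to whatever runner you use (dalai's web UI, llama.cpp, etc.) -- the function and variable names are just placeholders, not part of dalai:

    # Fill the Alpaca instruction/input template for a translation request.
    # How you send the resulting string to the model depends on your setup.
    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        "### Instruction: {instruction}\n\n"
        "### Input: {input}\n\n"
        "### Response:"
    )

    def build_prompt(sentence: str, target_language: str) -> str:
        return ALPACA_TEMPLATE.format(
            instruction=f"Translate the following sentence into {target_language}.",
            input=sentence,
        )

    print(build_prompt("Wie spät ist es?", "English"))

The point is that the instruction goes in the Instruction slot and the sentence to translate goes in the Input slot; if you just paste the sentence on its own, the model has no idea what task you want.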

[1] https://github.com/cocktailpeanut/dalai