With the rapid growth of online text data in recent years, research on automated dialogue systems has made considerable progress. In this paper, we propose a new model, DepBERT, which builds on the pre-trained BERT model and integrates syntactic dependency features to extract key features from the customer and helpdesk utterances in dialogue content, enhancing the prediction of evaluations over multiple dialogue turns. The contribution of this research is an improved method for automated dialogue evaluation. Compared to BERT, the F1-score of DepBERT increases by 4% on the customer dataset and by 10% on the helpdesk dataset, indicating that it can effectively predict task behavior in dialogues between customers and the helpdesk.