Prompt and accurate identification of plant diseases is essential for sustainable growth and consistent yields. Manual observation, the traditional diagnostic approach, is laborious, slow, and prone to human error. Deep learning has advanced rapidly and is now widely applied to automated plant disease detection, yet it remains a probabilistic approach that can still misclassify in practice. Our contributions are twofold: (1) identifying features that allow diseases to be classified, and (2) precisely locating diseased regions within images. For feature extraction and classification, VGG16 achieved the highest accuracy at 95.2%, indicating that it captures fine-grained texture and pattern details in plant images; ResNet reached 92.1% and U-Net 93.5%. For image segmentation, U-Net outperformed the other architectures, including fully convolutional networks and SegNet, with 91.6% accuracy. The research highlights that each architecture has a distinct strength: by our analysis, VGG16 excels at feature extraction because of its deep hierarchical structure, while U-Net performs better at segmentation because of its encoder-decoder design; ResNet also performed strongly, falling only slightly behind the other models. Beyond accuracy, the study thoroughly evaluates each model's precision, sensitivity, and F1 score, demonstrating the models' reliability for real-world agricultural applications. These results indicate that deep learning can make plant disease identification more efficient, less dependent on expert judgment, and feasible for everyday agricultural practice.
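As an illustration of how the reported evaluation metrics relate to one another, the following minimal sketch computes accuracy, precision, sensitivity (recall), and F1 score for a single disease class from per-class confusion-matrix counts. The counts here are hypothetical, not the study's actual data:

```python
# Hypothetical confusion-matrix counts for one disease class
# (tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives); illustrative values only.
tp, fp, fn, tn = 88, 5, 7, 100

precision = tp / (tp + fp)        # fraction of predicted positives that are correct
sensitivity = tp / (tp + fn)      # recall: fraction of actual positives recovered
f1 = 2 * precision * sensitivity / (precision + sensitivity)
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.3f} sensitivity={sensitivity:.3f} "
      f"f1={f1:.3f} accuracy={accuracy:.3f}")
```

Because F1 is the harmonic mean of precision and sensitivity, it penalizes a model that trades many false positives for few false negatives (or vice versa), which is why the study reports it alongside plain accuracy.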
This study adds significant value to the growing field of agricultural technology.