Designers of data-visualisation tools think deeply about their designs, constantly questioning their own judgements and design decisions. They make these judgements to ascertain how to improve their ideas and to make something suitable and fit for purpose. But self-reflection is often difficult, and learners in particular often struggle to reflect critically on their own work. There is therefore a need to guide learners in performing appropriate critical reflection on their work and in developing the skills to make better judgements. Many visualisation tools and programming libraries help users create visualisation systems; however, few tools or techniques help them systematically critique or evaluate their creations, to ascertain what is good and what is bad in their designs. Learners who wish to create data visualisations lack structures and guidelines to aid them in critiquing their work. Such critical analysis could be achieved by building an appropriate computing tool that uses metrics and heuristics to perform the judgement, or by human judgement supported by a written guide. Consequently, this research explores structures to help humans perform better critical evaluations. First, the dissertation uses a traditional research methodology to investigate metrics in visualisation, exploring related work and examining how computers use metrics to perform judgements. We design a framework that describes how and where metrics are used in the visualisation design process. Second, the focus turns to how humans think and make critical judgements about designs, especially visualisation designs. We undertake an observational study in which participants critique a range of objects and designs. Through in-depth analysis and markup of the data from this observational study, themes are extracted.
We used a thematic analysis approach to extract these categories, which we then use to develop our critical analysis system. Third, we followed an iterative approach to engineer the critical-evaluation system. The resulting system is the Critical Design Sheet (CDS), created after much refinement and adaptation through several think-aloud sessions held to detect design problems and refine the model into an effective one. Fourth, an evaluation is performed to assess the usability of the Critical Design Sheet with users. The reliability and learnability of the tool were tested by analysing usage data from two cohorts of students (PhD and undergraduate). Fifth, an online implementation of the Critical Design Sheet was developed and briefly evaluated to discover whether participants were satisfied with the computer version of the critical analysis system. These five parts represent the five contributions of the dissertation, respectively: the metric framework, the critical thinking workshop and its analysis, the design of the Critical Design Sheet, the evaluation of the method, and finally the prototype online system. In conclusion, this dissertation provides learners and practitioners with a technique (the CDS) that has been shown to help students successfully critique their visual designs and make decisions about their creations. The CDS works by breaking a visual design down into individual categories, making it easier for practitioners to critique their work.