Welcome back to our Responsible AI series, created by the Digital Scholarship and Data Services department as part of its ongoing commitment to promoting ethical and responsible AI practices for the Hopkins community.
Artificial intelligence (AI) is becoming part of the everyday research and learning environment. From tools that summarize articles to systems that generate code or translate text, AI is changing how we work with knowledge. For many students and researchers, this shift feels both exciting and unsettling. The tools promise efficiency, but they can also feel like a black box: Where does this information come from? Can I trust it? What happens if I use it incorrectly?
Becoming a confident AI user doesn’t mean knowing every tool on the market or mastering every advanced prompt technique. Confidence comes from developing habits of critical thinking, curiosity, and responsible use. It means understanding how AI works, what it can and cannot do, and how to use it in ways that align with the values of scholarship: accuracy, transparency, fairness, and respect for others’ work.
Begin With Intention
When you reach for an AI tool, start by asking: What am I trying to accomplish?
Are you brainstorming? Checking grammar? Summarizing an article? Analyzing data? Clear goals make it easier to judge whether AI is the right fit and how you’ll measure the usefulness of its output.
For example, if your goal is to get a high-level overview of a new field, an AI tool might provide a helpful first sketch — something that gets you oriented quickly. But if your goal is to conduct a systematic literature review, you’ll need to rely on discipline-specific databases and carefully verify results.
A confident user also knows when not to use AI. Some tasks—like interpreting sensitive data, writing personal reflections, or making claims that require detailed accuracy—demand more care than these tools can provide. Intention helps you set boundaries around how AI fits into your workflow.
Keep Human Oversight Central
AI outputs are fluent, but fluency is not the same as accuracy. Tools can misrepresent findings, invent citations, or reflect subtle biases in their training data. Without careful review, errors can slip into your work unnoticed.
Confident users keep human judgment at the center:
- Verify any references or statistics against trusted sources.
- Cross-check claims against authoritative databases, publications, or experts.
- Ensure the final product reflects your own voice, perspective, and values.
Consider an example: a student asks an AI to generate references for a paper. The output looks convincing, with author names, journal titles, and page numbers, but a closer look reveals that some of the citations don’t exist. A confident AI user knows to double-check each reference, rather than assuming the tool is correct.
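If a citation includes a DOI, one quick way to spot-check it is to look the DOI up in a public registry such as Crossref. The sketch below is a minimal illustration, assuming the third-party `requests` package and Crossref’s public REST API; the DOI shown is only a placeholder.

```python
# Illustrative sketch: spot-check a DOI against the public Crossref REST API.
# Requires the third-party `requests` package; the DOI below is a placeholder.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False otherwise."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

print(doi_exists("10.1000/example-doi"))  # Likely False: placeholder DOI
```

A match only confirms that the DOI exists; you still need to check that the authors, title, and journal in the record match what the AI produced.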
Hone Your Prompting Skills
Working with AI is like learning a new research method: the way you ask questions shapes the answers you get. Instead of vague instructions, try framing prompts with context and specificity.
- Weak prompt: “Summarize this article”
- Stronger prompt: “Summarize this article in three bullet points, highlighting methods and key findings for a graduate-level audience”
The difference is not just length; it’s about clarity. When you are clear with AI, you help it understand your expectations, your audience, and your goals. Good prompts set boundaries and expectations for the tool, such as tone (formal vs. conversational), length (bullet points vs. a paragraph), or focus (methods vs. conclusions).
One useful way to approach this is through the CLEAR Framework, a simple method for designing strong prompts:
- Concise: Keep prompts brief and direct. Avoid extra words that confuse the model. For example, instead of “Could you please provide me with a detailed explanation of photosynthesis and its significance?” say, “Explain photosynthesis and its significance.”
- Logical: Structure your prompt in a natural, coherent order. AI responds better when the request follows a clear sequence. For instance: “List the steps in writing a research paper, beginning with topic selection and ending with proofreading.”
- Explicit: Be precise about the scope and output format. Instead of “Tell me about the French Revolution,” try: “Provide a concise overview of the French Revolution, focusing on its causes, major events, and consequences.”
- Adaptive: If the AI response isn’t useful, adjust your prompt. Add details, change the scope, or shift the audience. For example, if “Discuss social media and mental health” feels too broad, adapt it to “Examine the relationship between social media and anxiety in adolescents.”
- Reflective: After receiving an AI response, evaluate it critically. Was it accurate? Relevant? Complete? Use those insights to refine your next prompt. Reflection makes prompting an iterative skill that improves over time.
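For those who script their AI workflows, the same principles carry over. The sketch below is one hypothetical way to assemble a CLEAR-style prompt in Python; the function and field names are illustrative, and the resulting text could just as easily be typed into a chat interface.

```python
# A minimal sketch of assembling a prompt from CLEAR-style elements.
# The function and field names are illustrative, not a fixed recipe.

def build_prompt(task: str, scope: str, output_format: str, audience: str) -> str:
    """Combine concise, explicit instructions into a single prompt string."""
    return (
        f"{task} "                      # Concise: the core request, stated directly
        f"Focus on {scope}. "           # Explicit: define the scope
        f"Respond as {output_format} "  # Explicit: set the output format
        f"for {audience}."              # Logical: audience framing comes last
    )

prompt = build_prompt(
    task="Summarize the attached article.",
    scope="the methods and key findings",
    output_format="three bullet points",
    audience="a graduate-level audience",
)
print(prompt)  # Prints the assembled instruction as a single string
```

Keeping the task, scope, format, and audience as separate pieces also makes it easier to adapt the prompt when the first response misses the mark.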
Practicing with CLEAR builds confidence. Over time, you’ll notice patterns in how different AI systems respond, and you’ll start anticipating when AI provides a useful starting point and when it is likely to stumble. This awareness makes you not only more effective but also more discerning, able to use AI as a tool while still applying your own scholarly judgment.
Protect Privacy and Integrity
It can be tempting to paste drafts, data, or notes into AI systems. But it’s important to remember that not all tools are secure, and many store or reuse the data you provide. Avoid entering confidential, sensitive, or unpublished material unless you are certain of the tool’s policies.
Protecting your own work also means thinking carefully about how you present AI’s role in your writing or research. Integrity means being transparent. If AI played a meaningful role in shaping your work, whether in a grant application, a research paper, or a classroom assignment, disclose it.
Funders like the NIH, as well as journals and publishers, are beginning to require disclosure. Even when it isn’t required, openness protects your credibility and builds trust with collaborators. Transparency also shows that you are aware of the limitations of the tools and are taking responsibility for how they are used.
Be Mindful of Bias
AI systems are trained on massive datasets, but those datasets don’t represent the world evenly. Some languages, cultures, and research areas are underrepresented. Some stereotypes and biases are amplified.
A confident user doesn’t take outputs at face value but asks: Whose knowledge is included here? Whose is missing? How might these gaps shape the results I’m seeing?
For instance, translation tools may perform well with common European languages but struggle with Indigenous or underrepresented dialects. Summarization tools may highlight perspectives common in English-language journals while overlooking important contributions from other regions.
Stay Curious and Keep Learning
The landscape of AI is changing rapidly. New tools are released every month, policies are still being written, and best practices are evolving.
Confidence comes from curiosity. Try out tools in low-stakes ways before using them for high-stakes work. Compare outputs across different platforms. Ask questions when results look strange or too good to be true. Talk with peers, instructors, and colleagues about what’s working and what isn’t.
Treat AI literacy like any other academic skill: something you build over time through practice, reflection, and community. The more you engage thoughtfully, the more prepared you’ll be to adapt as the technology evolves.
How the Sheridan Libraries Can Help
The Digital Scholarship and Data Services team is here to support you as you experiment with these tools and learn to use them responsibly. We offer:
- Workshops and training sessions on AI topics.
- Guides and online resources that explain policies, highlight best practices, and point you to trustworthy tools.
Whether you’re curious about how AI can fit into your workflow or you’re grappling with questions of accuracy, bias, or disclosure, the Sheridan Libraries are a partner in your learning and research journey.