GitHub Copilot Chat Context and Efficiency

Typed “hi” into GitHub Copilot Chat inside VS Code, and the logs were eye-opening: that tiny greeting triggered a request carrying ~18,000 prompt tokens. Not because “hi” is expensive, but because context is. Even a simple prompt can include a large background payload such as:

• Tool definitions: what the assistant can access (search, edit, terminal, git, notebooks, etc.)
• Instruction layers: rules for how the agent should behave
• Project context: workspace structure and relevant files
• Active focus: open tabs, selected code, current editor state
• Memory/state: prior chat history, preferences, session context

Different products implement this differently, but the pattern is consistent.

Some tips for keeping context efficient:

• We tend to accumulate tools over time. Periodically audit them and keep only high-value tools always enabled.
• Use Skills, since they are invoked only when relevant instead of staying always-on.
• Keep your workspace focused when asking questions.

Every token saved creates more room for useful context. 🙂

#GitHub #Copilot #VSCode #AI #Assistant #Coding #DeveloperTools #LLM #GenAI
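To make the idea concrete, here is a rough sketch of how those context layers eat into a token budget. This is not Copilot's actual accounting: the layer sizes are made-up numbers, and it uses the common "~4 characters per token" rule of thumb rather than a real tokenizer.

```python
# Rough sketch: how context layers consume a prompt-token budget.
# Layer sizes below are hypothetical; real payloads vary by product and session.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

# Hypothetical payload sizes for each context layer (in characters).
context_layers = {
    "tool_definitions": "x" * 40_000,   # many tool schemas add up fast
    "instruction_layers": "x" * 12_000,
    "project_context": "x" * 10_000,
    "active_focus": "x" * 6_000,
    "user_message": "hi",               # the part the user actually typed
}

budget = 128_000  # assumed model context window, in tokens
used = {name: estimate_tokens(text) for name, text in context_layers.items()}
total = sum(used.values())

for name, tokens in used.items():
    print(f"{name:20s} ~{tokens:>6d} tokens")
print(f"{'total':20s} ~{total:>6d} / {budget} tokens")
```

Even in this toy breakdown, the user's message is a rounding error next to the standing payload, which is why trimming always-on tools and instructions pays off.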
