This week we had in our program the concluding event of our EU-funded Learning Layers (LL) project – the Final Review. Normally such an event is organised at the premises of the respective Directorate General of the European Commission – in our case DG Research, which is located in Luxembourg. However, after our Year 2 Review Meeting the said building was demolished and DG Research moved to a temporary building. Therefore, the review meetings have since been organised in such a building or elsewhere. This gave rise to our proposal that the final review be organised at the premises of one of our application partner organisations – to give the Project Officer and the review panel a chance to get a more lively picture of the impact of our work. This proposal was accepted and we had a brief discussion on the remaining options. In general, the construction sector training centre Bau-ABC Rostrup would have liked to host such an event, but this was not possible, because in January their meeting rooms are fully booked for continuing vocational training courses. Therefore, our best option was to organise the event primarily at the Norddeutsches Zentrum für Nachhaltiges Bauen (NZNB – North-German Centre for Ecological Construction Work) in Verden, near Bremen. Below I try to give a picture of the arrangements and the agenda of the Review Meeting and of how we made use of the spaces provided by the NZNB to present our work in a more dynamic and dialogue-oriented way.

Making appropriate use of the spaces of the NZNB

We came to the conclusion that we should organise the first day of the review meeting around two ‘exhibition spaces’ that portray our two sectoral pilots. In addition, we would present the work of the host organisation. Therefore, we located our activities in a workshop hall (“Panzerhalle”) and in the meeting rooms above the clay and strawbale construction hall.
There we had a large meeting room, part of which we then used for the two exhibition spaces. Having structured the main part of the agenda around these internal exhibitions and supporting presentations, we arranged that during the lunch break the review panel could briefly visit the permanent exhibition of the NZNB on ecological construction work in their main building. Also, we wanted to give them a brief presentation on the clay and strawbale building techniques and the courses organised in the workshop building.

Presenting our work with visual images, tool demonstrations and conversations

For the exhibition spaces of the two sectoral pilots we had some common content and then somewhat different settings:

a) As the common content we had a Mini-Poster Wall that presented all the Learning Toolbox (LTB) stacks that had been prepared for piloting or demonstration purposes.
b) For the Healthcare exhibition space we had the following contents and activities that were offered for free exploration:
c) For the Construction exhibition space we had the following contents and spots that were offered as a ‘guided tour’:

Giving visibility to our application partners and to the use of LTB

One of our major points was to engage our application partners in the ‘exhibition spaces’ and in the supporting presentation sessions. For this purpose we had arranged for Thomas Isselhard from the network for ecological construction work (Netzwerk Nachhaltiges Bauen) to present his ways of using Learning Toolbox in construction work.
Likewise, we had invited two full-time trainers (Lehrwerkmeister) from Bau-ABC to present their initiatives for using LTB and their experiences of using it in apprentice training. During the two preparatory days we inserted most of the content into the Learning Toolbox to make the two ‘exhibition areas’ accessible via LTB stacks.

– – –

I think this is enough of the advance planning and of the preparatory measures that we took during the two preparatory days (Monday and Tuesday) this week. It is worthwhile to note that we had arranged the accommodation of our guests in Bremen (and transport between Verden and Bremen) so that the guests could also explore Bremen in the evenings. On the final day of the event we relocated the meeting to Bremen to make the travel arrangements easier. So, this was a brief overview of our preparations. In my three following blogs I will give more information on our presentations and on the discussions.

More blogs to come …

Posted in Evaluation, Informal learning, Knowledge development, Learning Layers, participation, Project, workinglearning | Comments Off on Final Review of Learning Layers – Part One: The Event and the Arrangements

Last week our EU-funded Learning Layers (LL) project had its last joint project consortium meeting (before the final review meeting) in Leeds, hosted by Leeds University, the NHS and our software partner PinBellCom (which has since merged into the EMIS Group). This consortium meeting differed from many earlier ones because most of the work of the project had already been done. Also, quite a lot of strategic decisions concerning the final reporting had already been made. Therefore, we could concentrate on harvesting the most recent results and coordinating some preparatory processes for the final reporting. Yet this meeting had its salt and spices as well. In this first post I will give a brief overview of the meeting as a whole.
In the second post I will focus on the picture that I/we gave of the construction sector pilot in some of the sessions.

Overview of the main sessions

After a quick assessment of the current phase of the project we started working in groups and in interim plenaries, to be followed by further group work:

Altogether we made good progress in getting a common picture of what we have achieved and how to present it. To be sure, we have several points to be settled in a number of working meetings during the coming weeks. But the main thing is that we set the course towards achieving common results in the time that is available – and we are fully committed to making it. In the next post I will take a closer look at the work with the construction pilot in the Leeds meeting.

More blogs to come …

Posted in Evaluation, Informal learning, Innovation, Knowledge development, Learning Layers, mobile learning, online learning, research, workinglearning | Comments Off on Learning Layers in Leeds – Part One: Paving the way for the final run

In the beginning of September we made an important field visit in the context of our EU-funded Learning Layers (LL) project to our application partner organisation – the training centre Bau-ABC (see my blog post of 13.9.2015). On Friday some LL colleagues had a chance to make a follow-up visit to Bau-ABC, while the others were having a meeting in ITB with the visiting delegation from the Singapore Workforce Development Agency. Since I was involved in the meeting in ITB, I can only report on the visit on the basis of the information from my colleague Lars Heinemann.

Update 2.10.2015: I published this post some time ago as a single blog entry. Now that I have had the chance to listen to the recordings of the interviews in Bau-ABC, I have come to the conclusion that it is worthwhile to discuss some points made by the Bau-ABC trainers in greater detail.
Here again, I am also relying on first-hand information from Lars Heinemann.

The aim of the visit

The visit was planned quite some time ago as a field visit to get feedback data on the ongoing pilot testing with the Learning Toolbox (LTB). Since the LL teams of ITB and Bau-ABC could send only one participant to the LL consortium meeting in Toledo, our LL colleagues from the University of Innsbruck (UIBK), Stefan Thalmann and Markus Manhart, came to Bremen to have planning meetings with us and to make field visits. However, given the very recent field visit (with the newly published Beta version of LTB), we felt that the evaluation talks were somewhat rushed. After all, the trainers had only just made their first experiences in creating their own stacks, pages and tiles in the LTB (to be used by other users).

Talks in Bau-ABC

The visitors (Lars, Stefan and Markus) were pleased to see that their talks with the Bau-ABC trainers Markus Pape (Zimmerer = carpenter) and Lothar Schoka (Brunnenbauer = borehole builder) were well-timed and informative. Both trainers had made further efforts to familiarise themselves with the LTB Beta version. They had also made concrete plans for engaging their apprentices later in the autumn as users of LTB in their training projects. According to their information, the number of apprentices to be involved in such pilots would be ca. 100 in both trades. As an advance measure they had collected a list of volunteer users to start testing with LTB before the actual pilot. In this respect they could both give informative reports on what is going on and what is to be expected in the near future. (We expect the UIBK colleagues to share recordings of these talks with ITB soon.) In addition to their own experiences and plans for piloting, they had some urgent requests for the LTB developers.
Some of these points have already been discussed with the developers, but now we got the points of the trainers from the pilot site:

1) For the trainers it is important that they can send messages to groups and individuals.
2) For trainers and apprentices it is important to have a notification function that alerts the apprentices when new learning materials have been made accessible and informs the trainers when apprentices have accessed the information. Moreover, both parties should be notified of replies or questions asking for further information.
3) For trainers and apprentices it is important to have a commentary function that makes it possible to add questions or comments to texts that are used for instruction and/or documentation of learning processes.
4) At the moment the LTB has been designed for Android phones and tablets – which are mostly used by the apprentices. Yet about one third are using iOS phones, so it is essential to proceed to iOS versions or find alternative solutions to involve them in the pilot testing.

Update 2.10.2015: I have let my initial blog post stand as it was written before listening to the recordings – with one amendment. Now that I have got access to the recordings, it is interesting to have a glimpse at some of the points made by the trainers and to relate them to our earlier interviews and discussions with them. As I see it, through such an examination we learn a lot about how the fieldwork of the LL project has made progress during the years of co-design and pilot activities.

More blogs to come …

Posted in Evaluation, Informal learning, Innovation, Knowledge development, Learning Layers, LTB-Blogs, online learning, participation, trainers, workinglearning | Comments Off on Interim reports on LL fieldwork in Bau-ABC – Part One: Evaluation talks and plans for field testing

I have been in Brussels for the last two days – speaking at the 9th European Week of Regions and Cities organised by DG Regio and also taking the opportunity to join other sessions.
My topic was Evaluation 2.0. I have been very encouraged by the positive feedback I’ve been getting all day, both face-to-face and through Twitter. I thought people would be generally resistant to the idea as it was fairly hard-hitting (and in fairness, some were horrified!) but far more have been interested and very positive, including quite a lot of Commission staff. However, the question now being asked by a number of them is “How do we progress this?” – meaning, specifically, in the context of the evaluation of Regional Policy and DG Regio intervention.

Evaluation 2.0 in Regional Policy evaluation

I don’t have any answers to this – in some ways, that’s not for me to decide! I have mostly used Evaluation 2.0 stuff in the evaluation of education projects, not regional policy. And my recent experience of the Cohesion Fund, ERDF, IPA or any of the structural funds is minimal. However, the ideas are generic and if people think there are some they could work with, that’s fine! That said, here are some suggestions for moving things forward – some of them are mine, most have been mooted by various people who have come to talk to me today (and bought me lots of coffee!)

Suggestions for taking it forward

P.S. Message to the large numbers of English delegates at the conference: when you left Heathrow yesterday to come to Brussels, I do hope you waved to the English Rugby team arriving home from the Rugby World Cup in New Zealand. (Just as well this conference was not a week later or I’d have to leave a similar message for the French delegates…)

Posted in Communities of Practice, Evaluation, How to do, Innovation, integration of technology, online communities, twitter, Wales Wide Web | Comments Off on Evaluation 2.0: How do we progress it?

Graham Attwell interviews Jenny Hughes about Evaluation 2.0

Just what is Evaluation 2.0?

Evaluation 2.0 is a set of ideas about evaluation that Pontydysgu are developing.
At its simplest, it’s about using social software at all stages of the evaluation process in order to make evaluation more open, more transparent and more accessible to a wider range of stakeholders. At a theoretical level, we are trying to push forward and build on Guba and Lincoln’s ideas around fourth-generation evaluation, which is a constructivist approach incorporating key ideas around negotiation, multiple realities and stakeholder engagement. But this is the first part of the journey – ultimately, I believe that e-technologies are going to revolutionise the way we think about and practise evaluation.

In what way do you think this is going to happen?

Firstly, the use of social media gives stakeholders a real voice – irrespective of where they are located. Stakeholders can create and publish evaluation content. For example, in the past I might carry out some interviews as part of an evaluation. Sometimes I recorded them, sometimes I just made notes. Then I would try to interpret them and draw some conclusions about what they meant. Now I set up a web page for each evaluation and I podcast the interviews using audio or video and put them on the site. (Obviously this has to be negotiated with the interviewee but so far no one has raised any objections.) There is the usual comment box so any stakeholder with access to the site can respond to the interview, add their interpretations, agree or disagree with my conclusions and so on.

Secondly, I think it is challenging our perceptions of who evaluators are. Everyone is now an evaluator. Think of the software that you use every day for online shopping from Amazon or eBay or any big chain store. If I want to buy a particular product I check out what other people have said about it, how many of them said it and how many stars it has been given. These are called recommender systems and I think they will have a big impact on evaluation.
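The star-rating mechanism described here boils down to simple crowd aggregation: collect many individual ratings, then surface an average score alongside how many people contributed it. A minimal sketch of that idea (product names and scores are invented for illustration):

```python
# Toy sketch of recommender-style star aggregation: each rating is an
# (item, stars) pair; we report the average stars and the number of
# raters per item. Item names below are made up for illustration.
from collections import defaultdict

def aggregate_ratings(ratings):
    """ratings: iterable of (item, stars) pairs, stars in 1..5.
    Returns {item: (average_stars, n_ratings)}."""
    totals = defaultdict(lambda: [0, 0])  # item -> [sum of stars, count]
    for item, stars in ratings:
        totals[item][0] += stars
        totals[item][1] += 1
    return {item: (s / n, n) for item, (s, n) in totals.items()}

ratings = [("kettle", 5), ("kettle", 4), ("kettle", 4), ("toaster", 2)]
summary = aggregate_ratings(ratings)
# "kettle": three raters averaging about 4.33 stars; "toaster": one rater at 2.0.
```

The point of the interview passage is that the judgement comes from the mass of small contributions, not from a single expert; the code only shows how little machinery that aggregation needs.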
We have moved from the paradigm of the ‘expert’ collecting and analysing data into a world of crowdsourcing – harnessing the potential of mass data collection and interpretation.

Thirdly, the explosion of Web 2.0 applications has provided us with a whole new range of evaluation tools that open up new methodological possibilities or make the old ones more efficient. For example, if I am at the stage of formulating and refining the evaluation questions, I put it out as a call on Twitter. It’s amazing how restricting evaluation questions to 140 characters can sharpen them up! I did an evaluation of a community capacity building project in an inner-city area recently and spent quite a long time before I went to the first meeting walking around the streets, checking out the community facilities, the state of the housing, local amenities and so on, to get a ‘feel’ for the area – except I did it on Google Earth and with Street View on Google Maps. There are about 20 or so other applications I use a lot in evaluation but maybe they will have to wait for another edition!

Fourthly, I think the potential of Web 2.0 changes the way we can visualise and present data. Why are we still writing long and indigestible text-based evaluation reports? Increasingly, clients prefer short, sharp evaluation ‘articles’ on maybe one outcome of an evaluation which they can find on a ‘newsy’ evaluation webpage – with hyperlinks to more detailed information, raw data or back-up evidence if they want to check it out. We can also create ‘chunks’ of evaluation reporting and repurpose them in different ways for different stakeholders, or they can be localised for different cultures – for example, I have started doing executive summaries as downloadable podcasts. I think Evaluation 2.0 is about creating a much wider range of evaluation products.

Following on from that, I think Evaluation 2.0 breaks down the formative-summative divide and notions of ‘the mid-term report’ or ‘the ex-ante report’.
Evaluation 2.0 is continuous, it is dynamic and it is interactive. For example, I use Google Docs with all my clients – I add them as readers and editors on all the folders that relate to their evaluations. At any time of the day or night they can see work in progress and add their comments. I keep their evaluation website up to date so they get evaluation information as soon as it is available.

So do you think all evaluators will have to move down this road or will there always be a place for evaluators using more established methods?

Personally, I think massive change is inevitable. Apart from anything else, our clients of the future will be the digital natives – they will expect it. There will always be a role for the evaluator but that role will be transformed and the skills will be different. I think a key job for the specialist evaluator will be designing the algorithms that underpin the evaluation. The evaluator will also need to be the creative director – they will need skills in informatics, in visualising and presenting information, and the creative skills to write blogs and wikis. They will need networking skills to set up and facilitate online communities of practice around different stakeholder groups, and the ability to repurpose evaluation objects. The rules of engagement are also changing – in the past you engaged with a client, now you engage with a community. We also have to think about how stakeholder-created content might change our ideas about copyright, confidentiality, ownership and authorship.

So do you think evaluators as we know them will become extinct?!

Well, as Mark Halper said: “Dinosaurs were highly successful and lasted a long time. They never went away. They became smaller, faster, and more agile, and now we call them birds.”

Posted in Evaluation, Wales Wide Web, web 2.0 | 1 Comment »

Late last year Jenny Hughes made a keynote presentation on Evaluation 2.0 for the UK Evaluation Society.
And pretty quickly we were getting requests for the paper of the presentation and the presentation slides. The problem is that we have not yet got round to writing the paper. And Jen, like me, uses most of her canvas space for pictures, not bullet points, on her slides. This makes the presentation much more attractive, but it is sometimes difficult to glean the meaning from the pictures alone. So we decided we would make a slidecast of the presentation. But half way through, we realised it wasn’t working. Lacking an audience and just speaking to the slides, it was coming over as stilted and horribly dry. So we started again and changed the format. Rather than treating it as a straightforward presentation, Jen and I just chatted about the central ideas. I think it works pretty well.

We started from the question of what Evaluation 2.0 is. Jen says: “At its simplest, it’s about using social software at all stages of the evaluation process in order to make it more open, more transparent and more accessible to a wider range of stakeholders.” But editing the slidecast I realised we had talked about a lot more than evaluation. This chat really deals with Web 2.0 and the different ways we are developing and sharing knowledge, the differences between expert knowledge and crowd-sourced knowledge, and new roles for teachers, trainers and evaluators resulting from the changing uses of social media.

Posted in digital revolution, education 2.0, educational shift, Evaluation, Presentations, social media, Social networking, Social Software, Wales Wide Web, web 2.0 | Comments Off on Evaluation 2.0 – the Slidecast

For a report that I am working on, I have been asked to assess the impact of new technologies on teaching and learning in the vocational education sector in the UK. One major problem in judging the impact of new technologies on teaching and learning, and on pedagogical approaches to teaching and learning, is the need for metrics for judging such impact.
It is relatively simple to survey the number of computers in a school, or the speed of an internet connection. It is also not impossible to count how many teachers are using a particular piece of technology. It is far harder to judge pedagogic change. One tool which could prove useful in this respect is the iCurriculum Framework (Barajas et al., 2004), developed by the European project of the same name. The framework was intended as a tool that educators can use to record the effects of their learners’ activities. It is based on seeing pedagogic and curricular activities along three dimensions – an Operational Curriculum, an Integrating Curriculum and a Transformational Curriculum. It is possible to approach pedagogies for using technologies for learning, for the same subject and for the same intended outcomes, on any one of those three dimensions. In terms of general approaches suggested by the research literature, most Further Education colleges in the UK are still approaching pedagogy and curriculum design from the standpoint of an operational curriculum, and although there are some examples of an integrating curriculum, there is little evidence of using technology for transformation.

Reference:
Barajas, M., Heinemann, L., Higueras, E., Kikis-Papakadis, K., Logofatu, B., Owen, M. et al. (2004). Guidelines for Emergent Competences at Schools, http://promitheas.iacm.forth.gr/i-curriculum/outputs.html

Posted in Evaluation, Pedagogy, teaching and learning, Wales Wide Web | 1 Comment »

A further update on planning and preparations for the PLE2010 conference. We received 81 proposals, far more than we had expected. And whilst very welcome, this has generated a lot of work. Each proposal was assigned two reviewers from the conference Academic Committee.
This has meant some members of the Committee being asked to review six papers, which is quite an effort for which we are truly grateful. One of the main points made in feedback to us from the reviewers was that a 360-word abstract is too short to make a proper judgement. And indeed some submissions did not make full use of the 360 words. We produced criteria for the submissions which were used by some reviewers. Others disagreed with this approach. Stephen Downes, commenting on my last blog post about the conference, said: “In other words, it is not appropriate to ask academic reviewers to bring their expertise to the material, and to then neuter that expertise with an overly prescriptive statement of criteria.”

On the whole I think I agree with Stephen. But I am still concerned with how we reach some common understandings or standards for reviewing, especially in a multi-disciplinary and multinational context. Following the completion of the reviews, the conference organising committee met (via Skype) to discuss the outcomes of the process. We did not have time to properly consider the results of all 166 reviews and in the end decided to unconditionally accept any paper with an average score of two or more (reviewers were asked to score each submission on a scale ranging from plus three to minus three). That accounted for twenty-six of the proposals. Each of the remaining proposals was reconsidered by the seven members of the organising committee in the light of the feedback from the reviewers. In many of the cases we agreed with their reviews, in some cases we did not.
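The triage rule described above – average the reviewers' scores on the −3 to +3 scale and accept unconditionally at an average of two or more, with everything else going back to the committee – can be sketched as follows (proposal IDs and scores are invented for illustration):

```python
# Illustrative sketch of the acceptance rule described in the post:
# each proposal has two reviewer scores on a -3..+3 scale; an average
# of 2 or more means unconditional acceptance, anything lower is
# reconsidered by the organising committee. IDs and scores are invented.

def triage(proposals):
    """proposals: {proposal_id: (score_a, score_b)}.
    Returns (accepted_ids, reconsider_ids), each sorted."""
    accepted, reconsider = [], []
    for pid, scores in proposals.items():
        average = sum(scores) / len(scores)
        (accepted if average >= 2 else reconsider).append(pid)
    return sorted(accepted), sorted(reconsider)

scores = {"P01": (3, 2), "P02": (2, 1), "P03": (-1, 3), "P04": (2, 2)}
accepted, reconsider = triage(scores)
# P01 (avg 2.5) and P04 (avg 2.0) clear the bar; P02 and P03 go to the committee.
```

Note how P03 illustrates the concern raised later in the post: two strongly divergent scores (−1 and 3) average out to a middling value, so averaging alone hides reviewer disagreement.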
Thirty of the proposals were accepted, but we have asked the proposers to resubmit their abstracts, feeling that improvements could be made in clarity and in explaining their ideas to potential participants at the conference. We referred nine of the proposals, in the main because whilst they seemed interesting, we did not feel they had sufficiently addressed the theme of the conference, i.e. Personal Learning Environments. These proposers we have asked to resubmit their abstracts, and we will review the proposals a second time. In a small number of cases we have recommended a change of format, particularly for research which is still at a conceptual stage and which we felt would be better presented as a short paper rather than a full proceedings paper. And, following the reviews, we did not accept five of the proposals. Once more, the main reason was their failing to address the themes of the conference.

I am sure we will have upset some people through this process. But the review process was, if nothing else, rigorous. The meeting to discuss the outcome lasted late into the evening and we were concerned wherever possible to be inclusive in our approach. We also decided not to use the automatic functionality of the EasyChair system for providing feedback on the proposals. The main reason for this was that we were very concerned that feedback should be helpful and constructive for all proposers. Whilst many of the reviews were very helpful in that respect, some were less so and thus we have edited those reviews.

Four quick thoughts on all this:

Posted in Evaluation, Mature, PLE2010, PLEs, PLN, Wales Wide Web | 1 Comment »

A quick update in my series of posts on our experiences in organising the PLE2010 conference. We received 82 proposals for the conference – far more than we had expected.
The strong response, I suspect, was due to three reasons: the interest in PLEs in the Technology Enhanced Learning community, the attraction of Barcelona as a venue, and our success in using applications like Twitter to publicise the conference virally. Having said that, in terms of format it seems to me that some of the submissions made as full conference papers would have been better made under other formats. However, present university funding requirements demand full papers and inhibit applications for work in progress or developing ideas in more appropriate formats.

For the last two weeks I have been organising the review process. We promised that each submission would be blind reviewed by at least two reviewers. For this we are reliant on the freely given time and energy of our Academic Committee. And whilst reviewing can be a learning process in itself, it is time consuming. Submissions have been managed through the open source EasyChair system, hosted by the University of Manchester. The system is powerful, but the interfaces are far from transparent and the help somewhat minimalist! I have struggled to get the settings in the system right and some functions seem buggy – for instance the function to show missing reviews seems not to be working.

Two lessons for the future seem immediately apparent. Firstly, we set the length of abstracts at a maximum of 350 words. Many of the reviewers have commented that this is too short to judge the quality of the submission. Secondly, there is the fraught issue of criteria for the reviews. We produced detailed guidelines for submissions based on the Creative Commons licensed Alt-C guidelines. The criteria were:

However, when I sent out the papers for review, whilst I provided a link to those guidelines, I failed to copy them into the text of the emails asking for reviews.
In retrospect, I should have attempted to produce a review template in EasyChair incorporating the guidelines. Even with such explicit guidelines, there is considerable room for different interpretation by reviewers. I am not sure that in our community we have a common understanding of what might be relevant to the themes of the conference, or of what counts as a contribution to scholarship and research into the use of PLEs for learning. I suspect this is the same for many conferences; however, the issue may be more problematic in an emergent area of education and technology practice.

We also set a scale for scoring proposals:

In addition we asked reviewers to state their degree of confidence in their review, ranging from 4 (expert) to 0 (null). In over half the cases where we have received two reviews, the variation between the reviewers is no more than 1. But there are also a number of reviews with significant variation. This suggests significant differences in reviewers’ understandings of the criteria – or of the meaning of the criteria. It could also just be that different reviewers have different standards. In any case, we will organise a further review procedure for those submissions where there are significant differences. But I wonder if the scoring process is the best approach. To have no scoring seems to be a way of avoiding the issue. I wonder if we should have scoring for each criterion, although this would make the review process even more complicated. I would welcome any comments on this. Whilst too late for this conference, as a community we are reliant on peer review as a quality process, and collective learning and reflection may be a way of improving our work.

Posted in Collaboration, Communities of Practice, Evaluation, ple, PLE2010, PLEs, Wales Wide Web | 3 Comments »

Yesterday I published a self-evaluation template, used by young children in a German school.
It was interesting, I thought, both in terms of the approach to formative evaluation – evaluation for learning rather than of learning – and in terms of the use of self-evaluation as a tool for discussion between students and teachers. A number of people commented that they did not understand German and furthermore, because the file was uploaded as an image, they were unable to use online translation software. Pekka Kamarainen noticed the queries on Twitter and kindly provided me with an English translation, reproduced above.

Posted in Evaluation, Pedagogy, teaching and learning, Wales Wide Web | 1 Comment »

Learning about technology

According to the University Technical Colleges website, newly released research on 11 to 17-year-olds, commissioned by the Baker Dearing Educational Trust, the charity which promotes and supports University Technical Colleges (UTCs), reveals that over a third (36%) have no opportunity to learn about the latest technology in the classroom and over two thirds (67%) admit that they have not had the opportunity even to discuss a new tech or app idea with a teacher. When asked about the tech skills they would like to learn, the top five were:

Building apps (45%)
Creating games (43%)
Virtual reality (38%)
Coding computer languages (34%)
Artificial intelligence (28%)

MOOC providers in 2016

According to Class Central, a quarter of the new MOOC users in 2016 came from regional MOOC providers such as XuetangX (China) and Miríada X (Latin America). They list the top five MOOC providers by registered users: XuetangX burst onto this list, making it the only non-English MOOC platform in the top five. In 2016, 2,600+ new courses were announced (vs. 1,800 last year), taking the total number of courses to 6,850 from over 700 universities.

Jobs in cyber security

In a new fact sheet the Tech Partnership reveals that the UK cyber workforce has grown by 160% in the five years to 2016.
58,000 people now work in cyber security, up from 22,000 in 2011, and they command an average salary of over £57,000 a year – 15% higher than tech specialists as a whole, and up 7% on last year. Just under half of the cyber workforce is employed in the digital industries, while banking accounts for one in five, and the public sector for 12%.

Number of students from outside the EU falls in the UK

Times Higher Education reports that the number of first-year students from outside the European Union enrolling at UK universities fell by 1 per cent from 2014-15 to 2015-16, according to data released by the Higher Education Statistics Agency. Data from the past five years show which countries are sending fewer students to study in the UK. Despite a large increase in the number of students enrolling from China, a cohort that has grown by 12,500 since 2011-12, enrolments by students from India fell by 13,150 over the same period. Other notable changes include an increase in students from Hong Kong, Singapore and Malaysia and a fall in students from Saudi Arabia and Nigeria.